From: "NeilBrown" <neilb@suse.de>
To: "Dave Chinner" <david@fromorbit.com>
Cc: "Jeff Layton" <jlayton@kernel.org>,
"J. Bruce Fields" <bfields@fieldses.org>,
"Theodore Ts'o" <tytso@mit.edu>, "Jan Kara" <jack@suse.cz>,
adilger.kernel@dilger.ca, djwong@kernel.org,
trondmy@hammerspace.com, viro@zeniv.linux.org.uk,
zohar@linux.ibm.com, xiubli@redhat.com, chuck.lever@oracle.com,
lczerner@redhat.com, brauner@kernel.org, fweimer@redhat.com,
linux-man@vger.kernel.org, linux-api@vger.kernel.org,
linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-kernel@vger.kernel.org, ceph-devel@vger.kernel.org,
linux-ext4@vger.kernel.org, linux-nfs@vger.kernel.org,
linux-xfs@vger.kernel.org
Subject: Re: [man-pages RFC PATCH v4] statx, inode: document the new STATX_INO_VERSION field
Date: Tue, 13 Sep 2022 11:49:03 +1000
Message-ID: <166303374350.30452.17386582960615006566@noble.neil.brown.name>
In-Reply-To: <20220913004146.GD3600936@dread.disaster.area>

On Tue, 13 Sep 2022, Dave Chinner wrote:
> On Mon, Sep 12, 2022 at 07:42:16AM -0400, Jeff Layton wrote:
> > On Sat, 2022-09-10 at 10:56 -0400, J. Bruce Fields wrote:
> > > On Fri, Sep 09, 2022 at 12:36:29PM -0400, Jeff Layton wrote:
> > > Our goal is to ensure that after a crash, any *new* i_versions that we
> > > give out or write to disk are larger than any that have previously been
> > > given out. We can do that by ensuring that they're equal to at least
> > > that old maximum.
> > >
> > > So think of the 64-bit value we're storing in the superblock as a
> > > ceiling on i_version values across all the filesystem's inodes. Call it
> > > s_version_max or something. We also need to know what the maximum was
> > > before the most recent crash. Call that s_version_max_old.
> > >
> > > Then we could get correct behavior if we generated i_versions with
> > > something like:
> > >
> > > 	i_version++;
> > > 	if (i_version < s_version_max_old)
> > > 		i_version = s_version_max_old;
> > > 	if (i_version > s_version_max)
> > > 		s_version_max = i_version + 1;
> > >
> > > But that last step makes this ludicrously expensive, because for this to
> > > be safe across crashes we need to update that value on disk as well, and
> > > we need to do that frequently.
> > >
> > > Fortunately, s_version_max doesn't have to be a tight bound at all. We
> > > can easily just initialize it to, say, 2^40, and only bump it by 2^40 at
> > > a time. And recognize when we're running up against it way ahead of
> > > time, so we only need to say "here's an updated value, could you please
> > > make sure it gets to disk sometime in the next twenty minutes"?
> > > (Numbers made up.)
> > >
> > > Sorry, that was way too many words. But I think something like that
> > > could work, and make it very difficult to hit any hard limits, and
> > > actually not be too complicated?? Unless I missed something.
> > >
> >
> > That's not too many words -- I appreciate a good "for dummies"
> > explanation!
> >
> > A scheme like that could work. It might be hard to do it without a
> > spinlock or something, but maybe that's ok. Thinking more about how we'd
> > implement this in the underlying filesystems:
> >
> > To do this we'd need 2 64-bit fields in the on-disk and in-memory
> > superblocks for ext4, xfs and btrfs. On the first mount after a crash,
> > the filesystem would need to bump s_version_max by the significant
> > increment (2^40 or whatever). On a "clean" mount, it wouldn't need
> > to do that.
>
> Why only increment on crash? If the filesystem has been unmounted,
> then any cached data is -stale- and must be discarded. e.g. unmount,
> run fsck which cleans up corrupt files but does not modify
> i_version, then mount. Remote caches are now invalid, but i_version
> may not have changed, so we still need the clean unmount-mount cycle
> to invalidate caches.

I disagree. We do need fsck to cause caches to be invalidated IF IT
FOUND SOMETHING TO REPAIR, but not if the filesystem was truly clean.

>
> IOWs, what we want is a salted i_version value, with the filesystem
> providing the unique per-mount salt that gets added to the
> externally visible i_version values.

I agree this is a simple approach. Possibly the best.
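
Just to be sure we mean the same thing, here is a minimal sketch of the
salting as I understand it. The field and helper names are all made up
for illustration; this is not a real VFS interface:

	#include <stdint.h>

	/* Hypothetical per-superblock state: set once at mount time,
	 * persistent, and strictly larger on every mount. */
	struct sb_salt_example {
		uint64_t s_version_salt;
	};

	/* What a STATX_INO_VERSION implementation might hand out.
	 * The on-disk counter never needs to know about crashes, only
	 * the salt does.  Wrap-around of the sum is harmless because
	 * callers only compare for change, not for ordering. */
	static uint64_t visible_i_version(const struct sb_salt_example *sb,
					  uint64_t disk_i_version)
	{
		return sb->s_version_salt + disk_i_version;
	}
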
>
> If that's the case, the salt doesn't need to be restricted to just
> modifying the upper bits - as long as the salt increments
> substantially and independently to the on-disk inode i_version then
> we just don't care what bits of the superblock salt change from
> mount to mount.
>
> For XFS we already have a unique 64 bit salt we could use for every
> mount - clean or unclean - and guarantee it is larger for every
> mount. It also gets substantially bumped by fsck, too. It's called a
> Log Sequence Number and we use them to track and strictly order
> every modification we write into the log. This is exactly what is
> needed for a i_version salt, and it's already guaranteed to be
> persistent.

Invalidating the client cache on EVERY unmount/mount could impose an
unnecessary cost. Imagine a client that caches a lot of data (several
large files) from a server which is expected to fail over from one
cluster node to another from time to time. Adding extra delays to a
fail-over is not likely to be well received.

I don't *know* this cost would be unacceptable, and I *would* like to
leave it to the filesystem to decide how to manage its own i_version
values. So maybe XFS can use the LSN for a salt. If people notice the
extra cost, they can complain.
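
For completeness, here is my reading of Bruce's chunked-ceiling idea
from earlier in the thread, as a sketch. All names are invented, and a
real implementation would need locking and a way to schedule the
superblock write-back:

	#include <stdbool.h>
	#include <stdint.h>

	#define VERSION_CHUNK	(1ULL << 40)	/* made-up reservation size */

	struct sb_versions {
		uint64_t max_old;	/* ceiling recorded before last crash */
		uint64_t max;		/* current on-disk ceiling */
	};

	/* Returns true when the new ceiling must be written back soon.
	 * Reserving a whole chunk at a time means the superblock is
	 * updated once per 2^40 changes, not once per change. */
	static bool bump_i_version(struct sb_versions *sb, uint64_t *i_version)
	{
		*i_version += 1;
		if (*i_version < sb->max_old)
			*i_version = sb->max_old;
		if (*i_version <= sb->max)
			return false;
		sb->max = *i_version + VERSION_CHUNK;
		return true;
	}
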
Thanks,
NeilBrown
>
> > Would there be a way to ensure that the new s_version_max value has made
> > it to disk?
>
> Yes, but that's not really relevant to the definition of the salt:
> we don't need to design the filesystem implementation of a
> persistent per-mount salt value. All we need is to define the
> behaviour of the salt (e.g. must always increase across a
> umount/mount cycle) and then you can let the filesystem developers
> worry about how to provide the required salt behaviour and its
> persistence.
>
> In the meantime, you can implement the salting and test it by
> using the system time to seed the superblock salt - that's good
> enough for proof of concept, and as a fallback for filesystems that
> cannot provide the required per-mount salt persistence....
>
> > Bumping it by a large value and hoping for the best might be
> > ok for most cases, but there are always outliers, so it might be
> > worthwhile to make an i_version increment wait on that if necessary.
>
> Nothing should be able to query i_version until the filesystem is
> fully recovered, mounted and the salt has been set. Hence no
> application (kernel or userspace) should ever see an unsalted
> i_version value....
>
> -Dave.
> --
> Dave Chinner
> david@fromorbit.com
>