From: Jeff Layton <jlayton@kernel.org>
To: NeilBrown <neilb@suse.de>
Cc: "J. Bruce Fields" <bfields@fieldses.org>,
	Theodore Ts'o <tytso@mit.edu>, Jan Kara <jack@suse.cz>,
	adilger.kernel@dilger.ca, djwong@kernel.org, david@fromorbit.com,
	trondmy@hammerspace.com, viro@zeniv.linux.org.uk,
	zohar@linux.ibm.com, xiubli@redhat.com, chuck.lever@oracle.com,
	lczerner@redhat.com, brauner@kernel.org, fweimer@redhat.com,
	linux-man@vger.kernel.org, linux-api@vger.kernel.org,
	linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-nfs@vger.kernel.org,
	linux-xfs@vger.kernel.org
Subject: Re: [man-pages RFC PATCH v4] statx, inode: document the new STATX_INO_VERSION field
Date: Fri, 16 Sep 2022 07:32:29 -0400
Message-ID: <d9c065939af2728b1c0768d5ef7526995b634902.camel@kernel.org>
In-Reply-To: <166328177826.15759.4993896959612969524@noble.neil.brown.name>

On Fri, 2022-09-16 at 08:42 +1000, NeilBrown wrote:
> On Fri, 16 Sep 2022, Jeff Layton wrote:
> > On Thu, 2022-09-15 at 10:06 -0400, J. Bruce Fields wrote:
> > > On Tue, Sep 13, 2022 at 09:14:32AM +1000, NeilBrown wrote:
> > > > On Mon, 12 Sep 2022, J. Bruce Fields wrote:
> > > > > On Sun, Sep 11, 2022 at 08:13:11AM +1000, NeilBrown wrote:
> > > > > > On Fri, 09 Sep 2022, Jeff Layton wrote:
> > > > > > > 
> > > > > > > The machine crashes and comes back up, and we get a query for i_version
> > > > > > > and it comes back as X. Fine, it's an old version. Now there is a write.
> > > > > > > What do we do to ensure that the new value doesn't collide with X+1? 
> > > > > > 
> > > > > > (I missed this bit in my earlier reply..)
> > > > > > 
> > > > > > How is it "Fine" to see an old version?
> > > > > > The file could have changed without the version changing.
> > > > > > And I thought one of the goals of the crash-count was to be able to
> > > > > > provide a monotonic change id.
> > > > > 
> > > > > I was still mainly thinking about how to provide reliable close-to-open
> > > > > semantics between NFS clients.  In the case the writer was an NFS
> > > > > client, it wasn't done writing (or it would have COMMITted), so those
> > > > > writes will come in and bump the change attribute soon, and as long as
> > > > > we avoid the small chance of reusing an old change attribute, we're OK,
> > > > > and I think it'd even still be OK to advertise
> > > > > CHANGE_TYPE_IS_MONOTONIC_INCR.
> > > > 
> > > > You seem to be assuming that the client doesn't crash at the same time
> > > > as the server (maybe they are both VMs on a host that lost power...)
> > > > 
> > > > If client A reads and caches, client B writes, the server crashes after
> > > > writing some data (to already allocated space so no inode update needed)
> > > > but before writing the new i_version, then client B crashes.
> > > > When server comes back the i_version will be unchanged but the data has
> > > > changed.  Client A will cache old data indefinitely...
> > > 
> > > I guess I assume that if all we're promising is close-to-open, then a
> > > client isn't allowed to trust its cache in that situation.  Maybe that's
> > > an overly draconian interpretation of close-to-open.
> > > 
> > > Also, I'm trying to think about how to improve things incrementally.
> > > Incorporating something like a crash count into the on-disk i_version
> > > fixes some cases without introducing any new ones or regressing
> > > performance after a crash.
> > > 
> > 
> > I think we ought to start there.
> > 
> > > If we subsequently wanted to close those remaining holes, I think we'd
> > > need the change attribute increment to be seen as atomic with respect to
> > > its associated change, both to clients and (separately) on disk.  (That
> > > would still allow the change attribute to go backwards after a crash, to
> > > the value it held as of the on-disk state of the file.  I think clients
> > > should be able to deal with that case.)
> > > 
> > > But, I don't know, maybe a bigger hammer would be OK:
> > > 
> > > > I think we need to require the filesystem to ensure that the i_version
> > > > is seen to increase shortly after any change becomes visible in the
> > > > file, and no later than the moment when the request that initiated the
> > > > change is acknowledged as being complete.  In the case of an unclean
> > > > restart, any file that is not known to have been unchanged immediately
> > > > before the crash must have its i_version increased.
> > > > 
> > > > The simplest implementation is to have an unclean-restart counter and to
> > > > always include this, multiplied by some constant X, in the reported
> > > > i_version.  The filesystem guarantees to record (e.g. at least to the
> > > > journal) the i_version whenever it comes close to X more than the
> > > > previously recorded value.  The filesystem gets to choose X.
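
To make that concrete, here's a toy sketch of that scheme as I
understand it (all names and the constant are invented for
illustration; this is not a real kernel API):

	#include <stdint.h>

	#define IVERS_X 1024	/* the fs-chosen constant */

	struct toy_inode {
		uint64_t i_version;		/* in-memory counter */
		uint64_t i_version_recorded;	/* last value journalled */
	};

	/* reported value: each unclean restart pushes it ahead by X */
	static uint64_t report_i_version(const struct toy_inode *ino,
					 uint64_t crash_count)
	{
		return ino->i_version + crash_count * IVERS_X;
	}

	/* on change: bump, but journal before drifting X past the record */
	static void bump_i_version(struct toy_inode *ino)
	{
		ino->i_version++;
		if (ino->i_version - ino->i_version_recorded >= IVERS_X - 1)
			ino->i_version_recorded = ino->i_version; /* journal it too */
	}

Because the journalled value never lags the in-memory one by X or more,
adding crash_count * X after an unclean restart can never hand out a
value that was already reported before the crash.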
> > > 
> > > So the question is whether people can live with invalidating all client
> > > caches after a cache.  I don't know.
> > > 
> > 
> > I assume you mean "after a crash". Yeah, that is pretty nasty. We don't
> > get perfect crash resilience by incorporating this into the on-disk
> > value, but I like that better than factoring it in at presentation time.
> > 
> > That would mean that the servers would end up getting hammered with read
> > activity after a crash (at least in some environments). I don't think
> > that would be worth the tradeoff. There's a real benefit to preserving
> > caches when we can.
> 
> Would it really mean the server gets hammered?
> 

Traditionally, yes. That was the rationale for fscache, after all.
Particularly in large render farms: when a large swath of client
machines reboots, they come back up with blank caches and hammer the
server with READs.

We'll be back to that behavior after a crash with this scheme, since
fscache uses the change attribute to determine cache validity. I guess
that's unavoidable for now.

> For files and NFSv4, any significant cache should be held on the basis
> of a delegation, and if the client holds a delegation then it shouldn't
> be paying attention to i_version.
> 
> I'm not entirely sure of this.  Section 10.2.1 of RFC 5661 seems to
> suggest that when the client uses CLAIM_DELEG_PREV to reclaim a
> delegation, it must then return the delegation.  However the explanation
> seems to be mostly about WRITE delegations and immediately flushing
> cached changes.  Do we know if there is a way for the server to say "OK,
> you have that delegation again" in a way that the client can keep the
> delegation and continue to ignore i_version?
> 

Delegations may change that calculus. In general I've noticed that the
client tends to ignore attribute cache changes when it has a delegation.

> For directories, which cannot be delegated the same way but can still be
> cached, the issues are different.  All directory morphing operations
> will be journalled by the filesystem so it should be able to keep the
> i_version up to date.  So the (journalling) filesystem should *NOT* add
> a crash-count to the i_version for directories even if it does for files.
> 

Interesting and good point. We should be able to make that distinction
and just mix in the crash counter for regular files.
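
Something like this, perhaps (s_crash_counter being the proposed new
super_block field, and the shift matching the bit layout discussed
below):

	/*
	 * Only regular files get the crash counter folded in; directory
	 * updates are journalled, so their counter is already crash-safe.
	 */
	static u64 crash_counter_fuzz(const struct inode *inode)
	{
		if (!S_ISREG(inode->i_mode))
			return 0;
		return (u64)inode->i_sb->s_crash_counter << 48;
	}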

> 
> 
> > 
> > > > A more complex solution would be to record (similar to the way orphans
> > > > are recorded) any file which is open for write, and to add X to the
> > > > i_version for any "dirty" file still recorded during an unclean
> > > > restart.  This would avoid bumping the i_version for read-only files.
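
(For illustration, the moving parts might look something like this --
every name here is invented, reusing the toy types from the earlier
sketch:

	/* journalled, like an orphan-list insert, at first open for write */
	void record_write_open(struct toy_inode *ino);

	/* drop the record once the last writer closes and data is committed */
	void clear_write_open(struct toy_inode *ino);

	/* at mount after an unclean shutdown: bump only recorded inodes */
	void recover_write_opens(struct toy_inode **still_recorded, int n)
	{
		for (int i = 0; i < n; i++)
			still_recorded[i]->i_version += IVERS_X;
	}

so only files that were actually open for write at crash time eat the X
bump.)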
> > > 
> > > Is that practical?  Working out the performance tradeoffs sounds like a
> > > project.
> > > 
> > > 
> > > > There may be other solutions, but we should leave that up to the
> > > > filesystem.  Each filesystem might choose something different.
> > > 
> > > Sure.
> > > 
> > 
> > Agreed here too. I think we need to allow for some flexibility.
> > 
> > Here's what I'm thinking:
> > 
> > We'll carve out the upper 16 bits of the i_version counter to be the
> > crash counter field. That gives us 64k crashes before we have to worry
> > about collisions. Hopefully the remaining 47 bits of counter will be
> > plenty, given that we don't increment it when no one has queried it
> > since the last change. (Can we mitigate wrapping here somehow?)
> > 
> > The easiest way to do this would be to add a u16 s_crash_counter to
> > struct super_block. We'd initialize that to 0, and the filesystem could
> > fill that value out at mount time.
> > 
> > Then inode_maybe_inc_iversion can just shift the s_crash_counter left
> > by 48 bits and plop it into the top of the value we're preparing to
> > cmpxchg into place.
> > 
> > This is backward compatible too, at least for i_version counter values
> > that are <2^47. With anything larger, we might end up with the value
> > going backward and a possible collision, but that's (hopefully) a small
> > risk.
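
IOW, something along these lines (a sketch only -- s_crash_counter is
the proposed new field; the rest mirrors the existing
inode_maybe_inc_iversion() logic in linux/iversion.h):

	#define IVERS_CRASH_SHIFT	48
	#define IVERS_CRASH_MASK	(0xffffULL << IVERS_CRASH_SHIFT)

	bool inode_maybe_inc_iversion(struct inode *inode, bool force)
	{
		u64 crash = (u64)inode->i_sb->s_crash_counter << IVERS_CRASH_SHIFT;
		u64 cur, new;

		cur = inode_peek_iversion_raw(inode);
		do {
			/* nothing to do if nobody has queried since the last bump */
			if (!force && !(cur & I_VERSION_QUERIED))
				return false;
			/* clear the QUERIED flag and bump (low bit is the flag) */
			new = (cur & ~I_VERSION_QUERIED) + I_VERSION_INCREMENT;
			/* plop the crash counter into the top 16 bits */
			new = (new & ~IVERS_CRASH_MASK) | crash;
		} while (!atomic64_try_cmpxchg(&inode->i_version, &cur, new));
		return true;
	}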

-- 
Jeff Layton <jlayton@kernel.org>
