From: Brian Foster <bfoster@redhat.com>
To: Ian Kent <raven@themaw.net>
Cc: Al Viro <viro@zeniv.linux.org.uk>,
	"Darrick J. Wong" <djwong@kernel.org>,
	Christoph Hellwig <hch@lst.de>,
	Miklos Szeredi <miklos@szeredi.hu>,
	David Howells <dhowells@redhat.com>,
	Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	xfs <linux-xfs@vger.kernel.org>
Subject: Re: [PATCH] vfs: check dentry is still valid in get_link()
Date: Mon, 17 Jan 2022 09:35:58 -0500	[thread overview]
Message-ID: <YeV+zseKGNqnSuKR@bfoster> (raw)
In-Reply-To: <275358741c4ee64b5e4e008d514876ed4ec1071c.camel@themaw.net>

On Mon, Jan 17, 2022 at 10:55:32AM +0800, Ian Kent wrote:
> On Sat, 2022-01-15 at 06:38 +0000, Al Viro wrote:
> > On Mon, Jan 10, 2022 at 05:11:31PM +0800, Ian Kent wrote:
> > > When following a trailing symlink in rcu-walk mode it's possible for
> > > the dentry to become invalid between the last dentry seq lock check
> > > and getting the link (eg. an unlink) leading to a backtrace similar
> > > to this:
> > > 
> > > crash> bt
> > > PID: 10964  TASK: ffff951c8aa92f80  CPU: 3   COMMAND: "TaniumCX"
> > > …
> > >  #7 [ffffae44d0a6fbe0] page_fault at ffffffff8d6010fe
> > >     [exception RIP: unknown or invalid address]
> > >     RIP: 0000000000000000  RSP: ffffae44d0a6fc90  RFLAGS: 00010246
> > >     RAX: ffffffff8da3cc80  RBX: ffffae44d0a6fd30  RCX: 0000000000000000
> > >     RDX: ffffae44d0a6fd98  RSI: ffff951aa9af3008  RDI: 0000000000000000
> > >     RBP: 0000000000000000   R8: ffffae44d0a6fb94   R9: 0000000000000000
> > >     R10: ffff951c95d8c318  R11: 0000000000080000  R12: ffffae44d0a6fd98
> > >     R13: ffff951aa9af3008  R14: ffff951c8c9eb840  R15: 0000000000000000
> > >     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
> > >  #8 [ffffae44d0a6fc90] trailing_symlink at ffffffff8cf24e61
> > >  #9 [ffffae44d0a6fcc8] path_lookupat at ffffffff8cf261d1
> > > #10 [ffffae44d0a6fd28] filename_lookup at ffffffff8cf2a700
> > > #11 [ffffae44d0a6fe40] vfs_statx at ffffffff8cf1dbc4
> > > #12 [ffffae44d0a6fe98] __do_sys_newstat at ffffffff8cf1e1f9
> > > #13 [ffffae44d0a6ff38] do_syscall_64 at ffffffff8cc0420b
> > > 
> > > Most of the time this is not a problem because the inode is
> > > unchanged while the rcu read lock is held.
> > > 
> > > But xfs can re-use inodes which can result in the inode->get_link()
> > > method becoming invalid (or NULL).
> > 
> > Without an RCU delay?  Then we have much worse problems...
> 
> Sorry for the delay.
> 
> That was a problem that was discussed at length with the original post
> of this patch, which included a patch for this too (misguided though it
> was).
> 

To Al's question, at the end of the day there is no rcu delay involved
with inode reuse in XFS. We do use call_rcu() for eventual freeing of
inodes (see __xfs_inode_free()), but inode reuse occurs for inodes that
have been put into a "reclaim" state before getting to the point of
freeing the struct inode memory. This led to the long discussion [1]
Ian references around ways to potentially deal with that. I think the
TLDR of that thread is that there are various potential options for
improvement, such as to rcu wait on inode creation/reuse (either
explicitly or via more open coded grace period cookie tracking), to rcu
wait somewhere in the destroy sequence before inodes become reuse
candidates, etc., but none of them seemingly agreeable for varying
reasons (IIRC mostly stemming from either performance or complexity) [2].
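
To make that concrete, here is a very loose paraphrase of the two paths
(not the actual xfs code, just the shape of it as I understand it):

	/* freeing side: the struct inode memory only goes away after a
	 * grace period (roughly what __xfs_inode_free() ends up doing) */
	call_rcu(&VFS_I(ip)->i_rcu, xfs_inode_free_callback);

	/* reuse side (the cache hit path, roughly): an inode parked in
	 * "reclaim" state is re-initialized and handed straight back out,
	 * with no grace period wait anywhere in between, which is what
	 * leaves a concurrent rcu-walk lookup exposed */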

The change that has been made so far in XFS is to turn rcuwalk for
symlinks off once again, which looks like it landed in Linus' tree as
commit 7b7820b83f23 ("xfs: don't expose internal symlink metadata
buffers to the vfs"). The hope is that between that patch and this
prospective vfs tweak, we can have a couple incremental fixes that at
least address the practical problem users have been running into (which
is a crash due to a NULL ->get_link() callback pointer caused by inode
reuse). The inode reuse vs. rcu thing might still be a broader problem,
but AFAIA that mechanism has been in place in XFS on Linux pretty much
forever.
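
For reference, the vfs side of the tweak conceptually amounts to
something like the sketch below. This is just the idea, not the literal
patch, and the helper name and its calling convention are made up:

/*
 * Sketch only: revalidate the dentry seqcount before trusting
 * inode->i_op in rcu-walk, and fall back to ref-walk (-ECHILD) if the
 * dentry changed under us or ->get_link has gone NULL because the
 * inode was recycled.
 */
static const char *get_link_checked(struct dentry *dentry,
				    struct inode *inode,
				    unsigned seq, bool rcu_walk,
				    struct delayed_call *done)
{
	const char *(*get)(struct dentry *, struct inode *,
			   struct delayed_call *);

	get = READ_ONCE(inode->i_op->get_link);
	if (rcu_walk) {
		if (read_seqcount_retry(&dentry->d_seq, seq) ||
		    unlikely(!get))
			return ERR_PTR(-ECHILD);
		return get(NULL, inode, done);
	}
	return get ? get(dentry, inode, done) : ERR_PTR(-EINVAL);
}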

Brian

[1] https://lore.kernel.org/linux-fsdevel/163660197073.22525.11235124150551283676.stgit@mickey.themaw.net/

[2] Yet another idea could be a mix of two of the previously discussed
approaches: stamp the current rcu gp marker in the xfs_inode somewhere
on destroy and check it on reuse to conditionally rcu wait when
necessary. Perhaps that might provide enough batching to mitigate
performance impact when compared to an unconditional create side wait.
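
In code terms the idea in [2] would look something like this
(i_destroy_gp is a made up field name, and this is only a sketch of the
idea, not a tested patch):

	/* destroy/reclaim side: stamp the current grace period before
	 * the inode becomes a reuse candidate */
	ip->i_destroy_gp = get_state_synchronize_rcu();

	/* reuse side (cache hit on a reclaimable inode): only block if a
	 * full grace period hasn't already elapsed since the stamp above;
	 * ideally that is rare enough to amortize the cost */
	cond_synchronize_rcu(ip->i_destroy_gp);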

> That discussion resulted in Darrick merging the problematic xfs inline
> symlink handling with the normal xfs symlink handling.
> 
> Another problem with these inline symlinks was that they would hand a
> pointer to internal xfs storage to the VFS. Darrick's change
> allocates and copies the link then hands it to the VFS to free
> after use. And since there's an allocation in the symlink handler,
> the rcu-walk case returns -ECHILD (when passed a NULL dentry), so the
> VFS will call unlazy before that next call, which I think is itself
> enough to resolve this problem.
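
(For anyone following along, the pattern described above looks roughly
like the sketch below. This is not Darrick's actual patch, and the two
helpers that fetch the link target are placeholders:)

static const char *sketch_get_link(struct dentry *dentry,
				   struct inode *inode,
				   struct delayed_call *done)
{
	char *link;

	/* rcu-walk: we are about to allocate, so return -ECHILD and let
	 * the VFS unlazy and call back in ref-walk mode */
	if (!dentry)
		return ERR_PTR(-ECHILD);

	/* copy the target out of internal storage; these two helpers are
	 * placeholders, not real xfs functions */
	link = kmemdup_nul(sketch_link_target(inode),
			   sketch_link_len(inode), GFP_KERNEL);
	if (!link)
		return ERR_PTR(-ENOMEM);

	/* hand the copy to the VFS and let it free it after use */
	set_delayed_call(done, kfree_link, link);
	return link;
}
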
> 
> The only thing I think might be questionable is the VFS copy of the
> inode pointer, but the inode is rcu freed so it will still be
> around, and the seq count will have changed, so I think it should be
> ok.
> 
> If I'm missing something please say so, ;)
> 
> Darrick's patch is (was last I looked) in his xfs-next tree.
> 
> Ian
> 
> 



Thread overview: 27+ messages
2022-01-10  9:11 [PATCH] vfs: check dentry is still valid in get_link() Ian Kent
2022-01-15  6:38 ` Al Viro
2022-01-17  2:55   ` Ian Kent
2022-01-17 14:35     ` Brian Foster [this message]
2022-01-17 16:28       ` Al Viro
2022-01-17 18:10         ` Al Viro
2022-01-17 19:48           ` Al Viro
2022-01-18  1:32             ` Al Viro
2022-01-18  2:31               ` Ian Kent
2022-01-18  3:03                 ` Al Viro
2022-01-18 13:47               ` Brian Foster
2022-01-18 18:25                 ` Brian Foster
2022-01-18 19:20                   ` Al Viro
2022-01-18 20:58                     ` Brian Foster
2022-01-18  8:29           ` Christian Brauner
2022-01-18 16:04             ` Al Viro
2022-01-19  9:05               ` Christian Brauner
2022-01-17 18:42         ` Brian Foster
2022-01-18  3:00         ` Dave Chinner
2022-01-18  3:17           ` Al Viro
2022-01-18  4:12             ` Dave Chinner
2022-01-18  5:58               ` Al Viro
2022-01-18 23:25                 ` Dave Chinner
2022-01-19 14:08                   ` Brian Foster
2022-01-19 22:07                     ` Dave Chinner
2022-01-20 16:03                       ` Brian Foster
2022-01-20 16:34                         ` Brian Foster
