From: Brian Foster <bfoster@redhat.com>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH RFC 3/4] xfs: crude chunk allocation retry mechanism
Date: Mon, 28 Feb 2022 16:45:49 -0500
Message-ID: <Yh1CjRONamUG0k1C@bfoster>
In-Reply-To: <20220223070058.GK59715@dread.disaster.area>

On Wed, Feb 23, 2022 at 06:00:58PM +1100, Dave Chinner wrote:
> On Sun, Feb 20, 2022 at 01:48:10PM -0500, Brian Foster wrote:
> > On Sat, Feb 19, 2022 at 09:54:40AM +1100, Dave Chinner wrote:
> > > On Fri, Feb 18, 2022 at 09:21:40AM -0500, Brian Foster wrote:
> > > > The point of background freeing inode chunks was that it makes this
> > > > problem go away because then we ensure that inode chunks aren't freed
> > > > until all associated busy inodes are cleared, and so we preserve the
> > > > historical behavior that an inode chunk allocation guarantees immediate
> > > > ability to allocate an inode. I thought we agreed in the previous
> > > > discussion that this was the right approach since it seemed to be in the
> > > > long term direction for XFS anyways.. hm?
> > > 
> > > Long term, yes, but we need something that works effectively and
> > > efficiently now, with minimal additional overhead, because we're
> > > going to have to live with this code in the allocation fast path for
> > > some time yet.
> > > 
> > 
> > Right, but I thought this is why we were only going to do the background
> > freeing part of the whole "background inode management" thing?
> > 
> > Short of that, I'm not aware of a really great option atm. IMO, pushing
> > explicit busy inode state/checking down into the block allocator is kind
> > of a gross layering violation. The approach this series currently uses
> > is simple and effective, but it's an unbound retry mechanism that just
> > continues to allocate chunks until we get one we can use, which is too
> > crude for production.
> 
> *nod*
> 
> Right, that's why I want to get this whole mechanism completely
> detached from the VFS inode RCU life cycle rules and instead
> synchronised by internal IO operations such as log forces.
> 
> The code I currently have is based on your changes, just without the
> fallback chunk allocation. I'm not really even scanning the irecs;
> I just look at the number of free inodes and count the number of
> busy inodes over the range of the record. If they aren't the same,
> we've got inodes in that record we can allocate from. I only do a
> free inode-by-free inode busy check once the irec we are going to
> allocate from has been selected.
> 
> Essentially, I'm just scanning records in the finobt to find the
> first with a non-busy inode. If we fall off the end of the finobt,
> it issues a log force and kicks the AIL and then retries the
> allocation from the finobt. There isn't any attempt to allocate new
> inode chunks in that case, but it may end up being necessary and can
> be done without falling back to the outer loops.
> 

Sure, that was just done for expediency to test the approach. That
said, as a first step I wouldn't be opposed to something that similarly
falls back to the outer loop, so long as it incorporates a check like
the one described below to restrict the retry attempt(s). The logic is
simple enough that more extensive cleanups could come separately..
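
Just so I'm picturing the record filter correctly, I'm assuming
something along these lines (illustrative only; the function name and
the xfs_inode_is_busy() helper are made up stand-ins for whatever busy
tracking we end up with):

    /*
     * Does this finobt record contain at least one free inode that is
     * not busy? A set bit in ir_free means the inode is free.
     */
    static bool
    xfs_inobt_rec_has_nonbusy_inode(
        struct xfs_mount                *mp,
        xfs_agnumber_t                  agno,
        struct xfs_inobt_rec_incore     *rec)
    {
        int     i;
        int     nbusy = 0;

        for (i = 0; i < XFS_INODES_PER_CHUNK; i++) {
            if (!(rec->ir_free & XFS_INOBT_MASK(i)))
                continue;       /* allocated, not interesting */
            if (xfs_inode_is_busy(mp, agno, rec->ir_startino + i))
                nbusy++;
        }
        /* usable only if not every free inode is busy */
        return nbusy < rec->ir_freecount;
    }

... with the log force + AIL push + rescan fallback once every record
fails that check, as you describe above.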

> i.e. as long as we track whether we've allocated a new inode chunk
> or not, we can bound the finobt search to a single retry. If we
> allocated a new chunk before entering the finobt search, then all we
> need is a log force because the busy inodes, by definition, are
> XFS_ISTALE at this point and waiting for a CIL push before they can
> be reclaimed. At this point a retry of the finobt scan will find
> those inodes that were busy now available for allocation.
> 

Remind me what aspect of the prospective VFS changes prevents inode
allocation from seeing a free inode whose RCU grace period has not yet
elapsed..? Will this delay freeing (or evicting) the inode until a
grace period elapses from when the last reference was dropped?
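
For context, the check I have in mind from the current series is
roughly: stamp the inode with an RCU cookie at eviction time and treat
it as busy until that grace period has expired. Something like the
below, where the i_destroy_gp field name is just illustrative:

    /* at eviction/inactivation: record the current RCU GP state */
    ip->i_destroy_gp = start_poll_synchronize_rcu();

    /* at recycle/allocation: the inode is busy until the GP expires */
    if (!poll_state_synchronize_rcu(ip->i_destroy_gp))
        return -EAGAIN;     /* still busy, skip this inode */

I'm trying to understand whether the lifecycle rework provides an
equivalent guarantee without such an explicit check.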

> If we haven't allocated a new chunk, then we can do so immediately
> and retry the allocation. If we still find them all busy, we force
> the log...
> 
> IOWs, once we isolate this busy inode tracking from the VFS inode
> RCU requirements, we can bound the allocation behaviour because
> chunk allocation and log forces provide us with a guarantee that the
> newly allocated inode chunks contain inodes that can be immediately
> reallocated without having to care about where the new inode chunk
> is located....
> 
> > Perhaps a simple enough short term option is to use the existing block
> > alloc min/max range mechanisms (as mentioned on IRC). For example:
> > 
> > - Use the existing min/max_agbno allocation arg input values to attempt
> >   one or two chunk allocs outside of the known range of busy inodes for
> >   the AG (i.e., allocate blocks higher than the max busy agino or lower
> >   than the min busy agino).
> > - If success, then we know we've got a chunk w/o busy inodes.
> > - If failure, fall back to the existing chunk alloc calls, take whatever
> >   we get and retry the finobt scan (perhaps more aggressively checking
> >   each record) hoping we got a usable new record.
> > - If that still fails, then fall back to synchronize_rcu() as a last
> >   resort and grab one of the previously busy inodes.
> > 
> > I couldn't say for sure if that would be effective enough without
> > playing with it a bit, but it would roughly emulate an algorithm that
> > filters out chunk block allocations containing at least one busy
> > inode, without having to modify the block allocation code. If it
> > avoids an RCU sync in the majority of cases it might be effective
> > enough as a stopgap until background freeing exists. Thoughts?
> 
> It might work, but I'm not a fan of how many hoops we are considering
> jumping through to avoid getting tangled up in the RCU requirements
> for VFS inode life cycles. I'd much prefer just being able to say
> "all inodes busy, log force, try again" like we do with busy extent
> limited block allocation...
> 

Well, presumably that works fine for your implementation of busy
inodes, but not so much for the variant intended to work around the RCU
inode reuse problem. ISTM this all just depends on the goals and plan
here, and I'm not seeing clear enough reasoning to grok what that is. A
summary of the options discussed so far:

- deferred inode freeing - ideal but too involved, want something sooner
  and more simple
- hinted non-busy chunk allocation (rough sketch below) - simple but
  jumping through too many RCU requirement hoops
- sync RCU and retry - most analogous to longer term busy inode
  allocation handling (i.e. just replace rcu sync w/ log force and
  retry), but RCU sync is too performance costly as a direct fallback
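
To be a bit more concrete about the hinted variant, I was thinking of
something like the following ahead of the normal chunk allocation
attempts in xfs_ialloc_ag_alloc() (untested sketch; the
pag->pagi_busy_max_agino field and the chunk_allocated label are made
up for illustration):

    /*
     * Hypothetical first attempt: constrain the chunk allocation to
     * the part of the AG above the highest known busy inode so that a
     * successful allocation is guaranteed free of busy inodes.
     */
    if (pag->pagi_busy_max_agino) {
        args.min_agbno = XFS_AGINO_TO_AGBNO(args.mp,
                                pag->pagi_busy_max_agino + 1);
        args.max_agbno = args.mp->m_sb.sb_agblocks - 1;
        error = xfs_alloc_vextent(&args);
        if (error)
            return error;
        if (args.fsbno != NULLFSBLOCK)
            goto chunk_allocated;
        /*
         * No room above the busy range; drop the constraint and fall
         * back to the normal attempts (and ultimately the rcu sync)
         * below.
         */
        args.min_agbno = 0;
        args.max_agbno = 0;
    }

The symmetric attempt below the minimum busy agino would look the same.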

ISTM the options here range from simple and slow, to moderately simple
and effective (TBD), to complex and ideal. So what is the goal for this
series? My understanding to this point is that the VFS changes are a
ways out, so the first step is to treat busy inodes as inodes with
pending grace periods, and a rework of the busy inode definition comes
later with the associated VFS changes. That essentially means this
series gets fixed up and posted as a mergeable component in the
meantime, albeit with a "general direction" that is compatible with the
longer term changes.

However, the RCU requirements/characteristics this series has to deal
with are never going to be perfectly aligned with the longer term busy
inode requirements, because the implementations/definitions will
differ, particularly if the RCU handling is pushed up into the VFS. So
if we're concerned about that, the alternative is to skip incrementally
addressing the RCU inode reuse problem, fold whatever bits of useful
logic from here into your inode lifecycle work as needed, and drop this
as a mergeable component.

Brian

> That said, the complexity gets moved elsewhere (the VFS inode
> lifecycle management) rather than into the inode allocator, but I
> think that's a valid trade-off because the RCU requirements for
> inode reallocation come from the VFS inode lifecycle constraints.
> 
> > > Really, I want foreground inode allocation to know nothing about
> > > inode chunk allocation. If there are no inodes available for
> > > allocation, it kicks background inode chunk management and sleeps
> > > even have to know about busy inodes - just work from an in-memory
> > > even have to know about busy inodes - just work from an in-memory
> > > per-ag free list of inode numbers that can be immediately allocated.
> > > 
> > > In this situation, inodes that have been recently unlinked don't
> > > show up on that list until they can be reallocated safely. This
> > > is all managed asynchronously in the background by the inodegc state
> > > machine (what I'm currently working on) and when the inode is
> > > finally reclaimed it is moved into the free list and allowed to be
> > > reallocated.
> > > 
> > 
> > I think that makes a lot of sense. That's quite similar to another
> > experiment I was playing with that essentially populated a capped size
> > pool of background inactivated inodes that the allocation side could
> > pull directly from (i.e., so allocation actually becomes a radix tree
> > lookup instead of a filtered btree scan), but so far I was kind of
> > fighting with the existing mechanisms, trying not to perturb sustained
> > remove performance, etc., and hadn't been able to produce a performance
> > benefit yet. Perhaps this will work out better with the bigger picture
> > changes to inode lifecycle and background inode management in place..
> 
> *nod*
> 
> The "don't have a perf impact" side of thigns is generally why I
> test all my new code with fsmark and other scalability tests before
> I go anywhere near fstests. If it doesn't perform, it's not worth
> trying to make work correctly...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 

