From: Dave Chinner <david@fromorbit.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: linux-xfs@vger.kernel.org, hch@infradead.org
Subject: Re: [PATCH 03/20] xfs: defer inode inactivation to a workqueue
Date: Mon, 2 Aug 2021 07:49:10 +1000
Message-ID: <20210801214910.GC2757197@dread.disaster.area>
In-Reply-To: <20210731042112.GM3601443@magnolia>

On Fri, Jul 30, 2021 at 09:21:12PM -0700, Darrick J. Wong wrote:
> On Fri, Jul 30, 2021 at 02:24:00PM +1000, Dave Chinner wrote:
> > On Thu, Jul 29, 2021 at 11:44:10AM -0700, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <djwong@kernel.org>
> > > 
> > > Instead of calling xfs_inactive directly from xfs_fs_destroy_inode,
> > > defer the inactivation phase to a separate workqueue.  With this change,
> > > we can speed up directory tree deletions by reducing the duration of
> > > unlink() calls to the directory and unlinked list updates.
> > > 
> > > By moving the inactivation work to the background, we can reduce the
> > > total cost of deleting a lot of files by performing the file deletions
> > > in disk order instead of directory entry order, which can be arbitrary.
> > > 
> > > We introduce two new inode flags -- NEEDS_INACTIVE and INACTIVATING.
> > > The first flag helps our worker find inodes needing inactivation, and
> > > the second flag marks inodes that are in the process of being
> > > inactivated.  A concurrent xfs_iget on the inode can still resurrect the
> > > inode by clearing NEEDS_INACTIVE (or bailing if INACTIVATING is set).
> > > 
> > > Unfortunately, deferring the inactivation has one huge downside --
> > > eventual consistency.  Since all the freeing is deferred to a worker
> > > thread, one can rm a file but the space doesn't come back immediately.
> > > This can cause some odd side effects with quota accounting and statfs,
> > > so we flush inactivation work during syncfs in order to maintain the
> > > existing behaviors, at least for callers that unlink() and sync().
> > > 
> > > For this patch we'll set the delay to zero to mimic the old timing as
> > > much as possible; in the next patch we'll play with different delay
> > > settings.
> > > 
> > > Signed-off-by: Darrick J. Wong <djwong@kernel.org>
> > .....
> > > +
> > > +/* Disable the inode inactivation background worker and wait for it to stop. */
> > > +void
> > > +xfs_inodegc_stop(
> > > +	struct xfs_mount	*mp)
> > > +{
> > > +	if (!test_and_clear_bit(XFS_OPFLAG_INODEGC_RUNNING_BIT, &mp->m_opflags))
> > > +		return;
> > > +
> > > +	cancel_delayed_work_sync(&mp->m_inodegc_work);
> > > +	trace_xfs_inodegc_stop(mp, __return_address);
> > > +}
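
To restate the lifecycle the commit message above describes, as a
sketch only - the flag names follow the description, the helpers are
hypothetical, and locking and error handling are elided:

/* Unlink-time teardown: tag the inode and defer the expensive work. */
ip->i_flags |= XFS_NEEDS_INACTIVE;
queue_delayed_work(mp->m_gc_workqueue, &mp->m_inodegc_work, 0);

/* Background worker: claim the inode, then do the heavy lifting. */
ip->i_flags &= ~XFS_NEEDS_INACTIVE;
ip->i_flags |= XFS_INACTIVATING;
xfs_inactive(ip);

/* Racing xfs_iget(): resurrect unless the worker has claimed it. */
if (ip->i_flags & XFS_INACTIVATING)
	return -EAGAIN;			/* wait for the worker */
ip->i_flags &= ~XFS_NEEDS_INACTIVE;	/* back from the dead */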
> > 
> > FWIW, this introduces a new mount field that does the same thing as the
> > m_opstate field I added in my feature flag cleanup series (i.e.
> > atomic operational state changes).  Personally I much prefer my
> > opstate stuff because this is state, not flags, and the namespace is
> > much less verbose...
> 
> Yes, well, is that ready to go?  Like, right /now/?  I already bolted
> the quotaoff scrapping patchset on the front, after reworking the ENOSPC
> retry loops and reworking quota apis before that...

Should be - that's why it's in my patch stack getting tested. But I
wasn't suggesting that you need to put it in first, just trying to
give you the heads up that there's a substantial conflict between
that and this patchset.
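
For reference, the opstate pattern ends up looking something like
this (a sketch - the bit number and helper names are assumed and the
series may differ in detail):

/* Atomic operational state, one bit per condition. */
#define XFS_OPSTATE_INODEGC_RUNNING	6

static inline bool xfs_is_inodegc_running(struct xfs_mount *mp)
{
	return test_bit(XFS_OPSTATE_INODEGC_RUNNING, &mp->m_opstate);
}

static inline bool xfs_clear_inodegc_running(struct xfs_mount *mp)
{
	return test_and_clear_bit(XFS_OPSTATE_INODEGC_RUNNING,
			&mp->m_opstate);
}

With that, xfs_inodegc_stop() above would open with
"if (!xfs_clear_inodegc_running(mp)) return;" rather than open-coding
test_and_clear_bit() against m_opflags.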

> > There are also conflicts all over the place because of that. All the
> > RO checks are busted,
> 
> Can we focus on /this/ patchset, then?  What specifically is broken
> about the ro checking in it?

Sorry, I wasn't particularly clear about that. What I meant was that
stuff like all the new RO and shutdown checks in this patchset don't
just produce merge conflicts, they cause compilation failures. So the
merge isn't simply a case of fixing conflicts: the code doesn't
compile (i.e. it is busted) even after all the reported merge
conflicts have been fixed.

> And since the shrinkers are always a source of amusement, what /is/ up
> with it?  I don't really like having to feed it magic numbers just to
> get it to do what I want, which is ... let it free some memory in the
> first round, then we'll kick the background workers when the priority
> bumps (er, decreases), and hope that's enough not to OOM the box.

Well, the shrinkers aren't intended as a one-shot memory pressure
notification, which is how you are trying to use them. They are
intended to be told the amount of work that needs to be done to free
memory, and they then calculate how much of that work should be done
based on their idea of the current level of memory pressure.

One-shot shrinker triggers never tend to work well because they treat
all memory pressure the same - very light memory pressure is dealt
with by the same big hammer that deals with OOM levels of memory
pressure.
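
In sketch form, the intended contract looks something like this
(hypothetical helpers and counters, not the patchset's code):

#include <linux/shrinker.h>

/* Hypothetical counter of inodes queued for inactivation. */
static atomic_long_t inodegc_queued_count;

/* Report how much work exists; the VM decides how much to ask for. */
static unsigned long
xfs_inodegc_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
{
	return atomic_long_read(&inodegc_queued_count);
}

/* Do only the slice of work the VM asked for. */
static unsigned long
xfs_inodegc_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
{
	/*
	 * sc->nr_to_scan already encodes the VM's view of memory
	 * pressure: a small slice under light pressure, most of the
	 * reported count as the system approaches OOM.
	 */
	return inodegc_inactivate_some(sc->nr_to_scan);
}

static struct shrinker xfs_inodegc_shrinker = {
	.count_objects	= xfs_inodegc_shrink_count,
	.scan_objects	= xfs_inodegc_shrink_scan,
	.seeks		= DEFAULT_SEEKS,
};

A graduated response falls out of returning an honest count and then
scanning only nr_to_scan objects per call, rather than firing a fixed
big-hammer flush the first time the shrinker is invoked.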

As it is, I'm more concerned right now with finding out why there are
such large performance regressions in highly concurrent recursive
chmod/unlink workloads. I spent most of Friday looking at it, trying
to work out what behaviour was causing the regression, but I haven't
isolated it yet. I suspect that it is lockstepping between user
processes and background inactivation for the unlink - I'm seeing the
backlink rhashtable show up in the profiles, which indicates that the
unlinked list lengths are an issue and we're lockstepping the AGI.
It may also simply be that there is too much parallelism hammering
the transaction subsystem now....

IOWs, I'm basically going to have to pull this apart patch by patch
to tease out where the behaviours go wrong and see if there are ways
to avoid and mitigate those behaviours.  Hence I haven't even got to
the shrinker/oom considerations yet; there's a bigger performance
issue that needs to be understood first. It may be that they are
related, but right now we need to know why recursive chmod is
saw-toothing (it's not a lack of log space!) and why concurrent
unlink throughput has dropped by half...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 47+ messages
2021-07-29 18:43 [PATCHSET v8 00/20] xfs: deferred inode inactivation Darrick J. Wong
2021-07-29 18:43 ` [PATCH 01/20] xfs: move xfs_inactive call to xfs_inode_mark_reclaimable Darrick J. Wong
2021-07-29 18:44 ` [PATCH 02/20] xfs: detach dquots from inode if we don't need to inactivate it Darrick J. Wong
2021-07-29 18:44 ` [PATCH 03/20] xfs: defer inode inactivation to a workqueue Darrick J. Wong
2021-07-30  4:24   ` Dave Chinner
2021-07-31  4:21     ` Darrick J. Wong
2021-08-01 21:49       ` Dave Chinner [this message]
2021-08-01 23:47         ` Dave Chinner
2021-08-03  8:34   ` [PATCH, alternative] xfs: per-cpu deferred inode inactivation queues Dave Chinner
2021-08-03 20:20     ` Darrick J. Wong
2021-08-04  3:20     ` [PATCH, alternative v2] " Darrick J. Wong
2021-08-04 10:03       ` [PATCH] xfs: inodegc needs to stop before freeze Dave Chinner
2021-08-04 12:37         ` Dave Chinner
2021-08-04 10:46       ` [PATCH] xfs: don't run inodegc flushes when inodegc is not active Dave Chinner
2021-08-04 16:20         ` Darrick J. Wong
2021-08-04 11:09       ` [PATCH, alternative v2] xfs: per-cpu deferred inode inactivation queues Dave Chinner
2021-08-04 15:59         ` Darrick J. Wong
2021-08-04 21:35           ` Dave Chinner
2021-08-04 11:49       ` [PATCH, pre-03/20 #1] xfs: introduce CPU hotplug infrastructure Dave Chinner
2021-08-04 11:50       ` [PATCH, pre-03/20 #2] xfs: introduce all-mounts list for cpu hotplug notifications Dave Chinner
2021-08-04 16:06         ` Darrick J. Wong
2021-08-04 21:17           ` Dave Chinner
2021-08-04 11:52       ` [PATCH, post-03/20 1/1] xfs: hook up inodegc to CPU dead notification Dave Chinner
2021-08-04 16:19         ` Darrick J. Wong
2021-08-04 21:48           ` Dave Chinner
2021-07-29 18:44 ` [PATCH 04/20] xfs: throttle inode inactivation queuing on memory reclaim Darrick J. Wong
2021-07-29 18:44 ` [PATCH 05/20] xfs: don't throttle memory reclaim trying to queue inactive inodes Darrick J. Wong
2021-07-29 18:44 ` [PATCH 06/20] xfs: throttle inodegc queuing on backlog Darrick J. Wong
2021-08-02  0:45   ` Dave Chinner
2021-08-02  1:30     ` Dave Chinner
2021-07-29 18:44 ` [PATCH 07/20] xfs: queue inodegc worker immediately when memory is tight Darrick J. Wong
2021-07-29 18:44 ` [PATCH 08/20] xfs: expose sysfs knob to control inode inactivation delay Darrick J. Wong
2021-07-29 18:44 ` [PATCH 09/20] xfs: reduce inactivation delay when free space is tight Darrick J. Wong
2021-07-29 18:44 ` [PATCH 10/20] xfs: reduce inactivation delay when quota are tight Darrick J. Wong
2021-07-29 18:44 ` [PATCH 11/20] xfs: reduce inactivation delay when realtime extents " Darrick J. Wong
2021-07-29 18:44 ` [PATCH 12/20] xfs: inactivate inodes any time we try to free speculative preallocations Darrick J. Wong
2021-07-29 18:45 ` [PATCH 13/20] xfs: flush inode inactivation work when compiling usage statistics Darrick J. Wong
2021-07-29 18:45 ` [PATCH 14/20] xfs: parallelize inode inactivation Darrick J. Wong
2021-08-02  0:55   ` Dave Chinner
2021-08-02 21:33     ` Darrick J. Wong
2021-07-29 18:45 ` [PATCH 15/20] xfs: reduce inactivation delay when AG free space are tight Darrick J. Wong
2021-07-29 18:45 ` [PATCH 16/20] xfs: queue inodegc worker immediately on backlog Darrick J. Wong
2021-07-29 18:45 ` [PATCH 17/20] xfs: don't run speculative preallocation gc when fs is frozen Darrick J. Wong
2021-07-29 18:45 ` [PATCH 18/20] xfs: scale speculative preallocation gc delay based on free space Darrick J. Wong
2021-07-29 18:45 ` [PATCH 19/20] xfs: use background worker pool when transactions can't get " Darrick J. Wong
2021-07-29 18:45 ` [PATCH 20/20] xfs: avoid buffer deadlocks when walking fs inodes Darrick J. Wong
2021-08-02 10:35 ` [PATCHSET v8 00/20] xfs: deferred inode inactivation Dave Chinner
