From: Dave Chinner <david@fromorbit.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: Dave Chinner <dchinner@redhat.com>,
	linux-xfs@vger.kernel.org, hch@infradead.org
Subject: Re: [PATCH 05/14] xfs: per-cpu deferred inode inactivation queues
Date: Thu, 5 Aug 2021 16:43:24 +1000	[thread overview]
Message-ID: <20210805064324.GE2757197@dread.disaster.area> (raw)
In-Reply-To: <162812921040.2589546.137433781469727121.stgit@magnolia>

On Wed, Aug 04, 2021 at 07:06:50PM -0700, Darrick J. Wong wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> Move inode inactivation to background work contexts so that it no
> longer runs in the context that releases the final reference to an
> inode. This allows a process that would otherwise block on
> inactivation to continue doing work while the filesystem processes
> the inactivation in the background.
> 
> A typical demonstration of this is unlinking an inode with lots of
> extents. The extents are removed during inactivation, so this blocks
> the process that unlinked the inode from the directory structure. By
> moving the inactivation to the background process, the userspace
> application can keep working (e.g. unlinking the next inode in the
> directory) while the inactivation work on the previous inode is
> done by a different CPU.
> 
> The implementation of the queue is relatively simple. We use a
> per-cpu lockless linked list (llist) to queue inodes for
> inactivation without requiring serialisation mechanisms, and a work
> item to allow the queue to be processed by a CPU bound worker
> thread. We also keep a count of the queue depth so that we can
> trigger work after a number of deferred inactivations have been
> queued.
> 
> The use of a bound workqueue with a single work depth allows the
> workqueue to run one work item per CPU. We queue the work item on
> the CPU we are currently running on, and so this essentially gives
> us affine per-cpu worker threads for the per-cpu queues. This
> maintains the effective CPU affinity that occurs within XFS at the
> AG level due to all objects in a directory being local to an AG.
> Hence inactivation work tends to run on the same CPU that last
> accessed all the objects that inactivation accesses and this
> maintains hot CPU caches for unlink workloads.
> 
> A depth of 32 inodes was chosen to match the number of inodes in an
> inode cluster buffer. This hopefully allows sequential
> allocation/unlink behaviours to defer inactivation of all the
> inodes in a single cluster buffer at a time, further helping
> maintain hot CPU and buffer cache accesses while running
> inactivations.
> 
> A hard per-cpu queue throttle of 256 inodes has been set to avoid
> runaway queuing when inodes that take a long time to inactivate are
> being processed. For example, when unlinking inodes with large
> numbers of extents that can take a lot of processing to free.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> [djwong: tweak comments and tracepoints, convert opflags to state bits]
> Reviewed-by: Darrick J. Wong <djwong@kernel.org>
> Signed-off-by: Darrick J. Wong <djwong@kernel.org>
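
For anyone trying to picture the mechanism, the queueing pattern described
above looks roughly like this (a generic sketch only - the struct, field
and function names here are illustrative, not the ones the patch uses):

struct inodegc_queue {
	struct llist_head	list;	/* inodes awaiting inactivation */
	struct work_struct	work;	/* bound to the owning CPU */
	unsigned int		items;	/* approximate queue depth */
};

static void inodegc_queue_inode(struct xfs_inode *ip)
{
	struct xfs_mount	*mp = ip->i_mount;
	struct inodegc_queue	*gc;

	/* Queue on the current CPU's list without taking any locks. */
	gc = get_cpu_ptr(mp->m_inodegc);
	llist_add(&ip->i_gclist, &gc->list);

	/* Kick the worker once a cluster buffer's worth (32) is queued. */
	if (++gc->items >= 32)
		queue_work_on(smp_processor_id(), mp->m_inodegc_wq, &gc->work);
	put_cpu_ptr(mp->m_inodegc);
}

static void inodegc_worker(struct work_struct *work)
{
	struct inodegc_queue	*gc = container_of(work,
						struct inodegc_queue, work);
	struct llist_node	*node = llist_del_all(&gc->list);
	struct xfs_inode	*ip, *n;

	/* Sketch only: the real code is more careful with this counter. */
	gc->items = 0;

	/* Inactivate everything that was queued on this CPU. */
	llist_for_each_entry_safe(ip, n, node, i_gclist)
		xfs_inactive(ip);
}
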
.....
> @@ -740,6 +751,25 @@ xfs_fs_sync_fs(
>  		flush_delayed_work(&mp->m_log->l_work);
>  	}
>  
> +	/*
> +	 * Flush all deferred inode inactivation work so that the free space
> +	 * counters will reflect recent deletions.  Do not force the log again
> +	 * because log recovery can restart the inactivation from the info that
> +	 * we just wrote into the ondisk log.
> +	 *
> +	 * For regular operation this isn't strictly necessary since we aren't
> +	 * required to guarantee that unlinking frees space immediately, but
> +	 * that is how XFS historically behaved.
> +	 *
> +	 * If, however, the filesystem is at SB_FREEZE_PAGEFAULT, this is our
> +	 * last chance to complete the inactivation work before the filesystem
> +	 * freezes and the log is quiesced.  The background worker will not
> +	 * activate again until the fs is thawed because the VFS won't evict
> +	 * any more inodes until freeze_super drops s_umount and we disable the
> +	 * worker in xfs_fs_freeze.
> +	 */
> +	xfs_inodegc_flush(mp);
> +
>  	return 0;
>  }
>  
> @@ -854,6 +884,17 @@ xfs_fs_freeze(
>  	 */
>  	flags = memalloc_nofs_save();
>  	xfs_blockgc_stop(mp);
> +
> +	/*
> +	 * Stop the inodegc background worker.  freeze_super already flushed
> +	 * all pending inodegc work when it sync'd the filesystem after setting
> +	 * SB_FREEZE_PAGEFAULTS, and it holds s_umount, so we know that inodes
> +	 * SB_FREEZE_PAGEFAULT, and it holds s_umount, so we know that inodes
> +	 * If the filesystem is read-write, inactivated inodes will queue but
> +	 * the worker will not run until the filesystem thaws or unmounts.
> +	 */
> +	xfs_inodegc_stop(mp);
> +
>  	xfs_save_resvblks(mp);
>  	ret = xfs_log_quiesce(mp);
>  	memalloc_nofs_restore(flags);

I still think this freeze handling is problematic. While I can't easily
trigger the problem I saw, I don't really see what makes the flush in
xfs_fs_sync_fs() prevent races with the final stage of freeze before
inactivation is stopped...
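
To spell out where I think the window is, freeze_super() orders things
roughly like this (paraphrased from memory, not the verbatim fs/super.c
code):

	sb->s_writers.frozen = SB_FREEZE_WRITE;
	sb_wait_write(sb, SB_FREEZE_WRITE);

	sb->s_writers.frozen = SB_FREEZE_PAGEFAULT;
	sb_wait_write(sb, SB_FREEZE_PAGEFAULT);

	/* writers and page faults are now blocked */
	sync_filesystem(sb);		/* -> xfs_fs_sync_fs() flushes inodegc */

	/* <-- nothing stops inodegc queueing/running in this window */

	sb->s_writers.frozen = SB_FREEZE_FS;
	sb_wait_write(sb, SB_FREEZE_FS);
	sb->s_op->freeze_fs(sb);	/* -> xfs_fs_freeze() stops inodegc */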

.... and ....

As I write this, the xfs/517 loop goes boom on my pmem test setup (but no DAX):

SECTION       -- xfs
FSTYP         -- xfs (debug)
PLATFORM      -- Linux/x86_64 test3 5.14.0-rc4-dgc #506 SMP PREEMPT Thu Aug 5 15:49:49 AEST 2021
MKFS_OPTIONS  -- -f -m rmapbt=1 /dev/pmem1
MOUNT_OPTIONS -- -o dax=never -o context=system_u:object_r:root_t:s0 /dev/pmem1 /mnt/scratch

generic/390 3s ...  3s
xfs/517 43s ... 
Message from syslogd@test3 at Aug  5 15:56:24 ...
kernel:[  162.849634] XFS: Assertion failed: mp->m_super->s_writers.frozen < SB_FREEZE_FS, file: fs/xfs/xfs_icache.c, line: 1889

I suspect that we could actually target this better and close the
race by doing something like:

xfs_fs_sync_fs()
{
	....

	/*
	 * If we are called with page faults frozen out, it means we are about
	 * to freeze the transaction subsystem. Take the opportunity to shut
	 * down inodegc because once SB_FREEZE_FS is set it's too late to
	 * prevent inactivation races with freeze. The fs doesn't get called
	 * again by the freezing process until after SB_FREEZE_FS has been set,
	 * so it's now or never.
	 *
	 * We don't care whether we get here from a normal syncfs call or from
	 * freeze - we can run this multiple times without issue, and we won't
	 * race with a restart because a restart can only occur when the state
	 * is either SB_FREEZE_FS or SB_FREEZE_COMPLETE.
	 */
	if (sb->s_writers.frozen == SB_FREEZE_PAGEFAULT)
		xfs_inodegc_stop(mp);
}
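
(sb and mp above are just what the existing function already has in scope:
sb is the super_block passed to ->sync_fs and mp is XFS_M(sb).)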

xfs_fs_freeze()
{
.....
error:
	/*
	 * We need to restart the inodegc on error because we stopped it at
	 * SB_FREEZE_PAGEFAULT level and a thaw is not going to be run to
	 * restart it now. We are at SB_FREEZE_FS level here, so we can restart
	 * safely without racing with a stop in xfs_fs_sync_fs().
	 */
	if (error)
		xfs_inodegc_start(mp);
	return error;
}

The stop and "restart on error" are done under the same s_umount hold, so
they are atomic w.r.t. other freeze operations, and so this doesn't have the
problems with nested freeze/thaw (g/390) that my last patch had.

Your thoughts?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 32+ messages
2021-08-05  2:06 [PATCHSET v9 00/14] xfs: deferred inode inactivation Darrick J. Wong
2021-08-05  2:06 ` [PATCH 01/14] xfs: introduce CPU hotplug infrastructure Darrick J. Wong
2021-08-05  2:06 ` [PATCH 02/14] xfs: introduce all-mounts list for cpu hotplug notifications Darrick J. Wong
2021-08-05  2:06 ` [PATCH 03/14] xfs: move xfs_inactive call to xfs_inode_mark_reclaimable Darrick J. Wong
2021-08-05  5:29   ` Dave Chinner
2021-08-05  2:06 ` [PATCH 04/14] xfs: detach dquots from inode if we don't need to inactivate it Darrick J. Wong
2021-08-05  5:30   ` Dave Chinner
2021-08-05  2:06 ` [PATCH 05/14] xfs: per-cpu deferred inode inactivation queues Darrick J. Wong
2021-08-05  6:43   ` Dave Chinner [this message]
2021-08-05  7:00     ` Darrick J. Wong
2021-08-05 22:15       ` Dave Chinner
2021-08-05 22:38         ` Darrick J. Wong
2021-08-07  0:21   ` Darrick J. Wong
2021-08-07 21:49     ` Dave Chinner
2021-08-09 23:36       ` Darrick J. Wong
2021-08-05  2:06 ` [PATCH 06/14] xfs: queue inactivation immediately when free space is tight Darrick J. Wong
2021-08-05  5:31   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 07/14] xfs: queue inactivation immediately when quota is nearing enforcement Darrick J. Wong
2021-08-05  5:35   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 08/14] xfs: queue inactivation immediately when free realtime extents are tight Darrick J. Wong
2021-08-05  5:36   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 09/14] xfs: inactivate inodes any time we try to free speculative preallocations Darrick J. Wong
2021-08-05  5:36   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 10/14] xfs: flush inode inactivation work when compiling usage statistics Darrick J. Wong
2021-08-05  5:38   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 11/14] xfs: don't run speculative preallocation gc when fs is frozen Darrick J. Wong
2021-08-05  5:40   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 12/14] xfs: use background worker pool when transactions can't get free space Darrick J. Wong
2021-08-05  5:42   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 13/14] xfs: avoid buffer deadlocks when walking fs inodes Darrick J. Wong
2021-08-05  2:07 ` [PATCH 14/14] xfs: throttle inode inactivation queuing on memory reclaim Darrick J. Wong
2021-08-05  5:44   ` Dave Chinner
