From: Dave Chinner <david@fromorbit.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: Dave Chinner <dchinner@redhat.com>,
	linux-xfs@vger.kernel.org, hch@infradead.org
Subject: Re: [PATCH 05/14] xfs: per-cpu deferred inode inactivation queues
Date: Fri, 6 Aug 2021 08:15:02 +1000
Message-ID: <20210805221502.GF2757197@dread.disaster.area>
In-Reply-To: <20210805070032.GW3601443@magnolia>

On Thu, Aug 05, 2021 at 12:00:32AM -0700, Darrick J. Wong wrote:
> On Thu, Aug 05, 2021 at 04:43:24PM +1000, Dave Chinner wrote:
> > On Wed, Aug 04, 2021 at 07:06:50PM -0700, Darrick J. Wong wrote:
> > > From: Dave Chinner <dchinner@redhat.com>
> > > 
> > > Move inode inactivation to background work contexts so that it no
> > > longer runs in the context that releases the final reference to an
> > > inode. This will allow process work that ends up blocking on
> > > inactivation to continue doing work while the filesystem processes
> > > the inactivation in the background.
....
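[Background - a simplified sketch of the queueing mechanism under
discussion, not the patch code itself; the structure, field and
function names here are approximations of what the patch does:]

struct xfs_inodegc {
	struct llist_head	list;	/* inodes awaiting inactivation */
	struct work_struct	work;	/* runs xfs_inactive() on the list */
};

/*
 * Instead of inactivating in the task that drops the last reference,
 * push the inode onto this CPU's lockless list and kick the per-cpu
 * worker to process the backlog in the background.
 */
static void
xfs_inodegc_queue(
	struct xfs_inode	*ip)
{
	struct xfs_mount	*mp = ip->i_mount;
	struct xfs_inodegc	*gc;

	gc = get_cpu_ptr(mp->m_inodegc);
	llist_add(&ip->i_gclist, &gc->list);
	queue_work(mp->m_inodegc_wq, &gc->work);
	put_cpu_ptr(gc);
}
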
> > > @@ -854,6 +884,17 @@ xfs_fs_freeze(
> > >  	 */
> > >  	flags = memalloc_nofs_save();
> > >  	xfs_blockgc_stop(mp);
> > > +
> > > +	/*
> > > +	 * Stop the inodegc background worker.  freeze_super already flushed
> > > +	 * all pending inodegc work when it sync'd the filesystem after setting
> > > +	 * SB_FREEZE_PAGEFAULT, and it holds s_umount, so we know that inodes
> > > +	 * cannot enter xfs_fs_destroy_inode until the freeze is complete.
> > > +	 * If the filesystem is read-write, inactivated inodes will queue but
> > > +	 * the worker will not run until the filesystem thaws or unmounts.
> > > +	 */
> > > +	xfs_inodegc_stop(mp);
> > > +
> > >  	xfs_save_resvblks(mp);
> > >  	ret = xfs_log_quiesce(mp);
> > >  	memalloc_nofs_restore(flags);
> > 
> > I still think this freeze handling is problematic. While I can't easily trigger
> > the problem I saw, I still don't really see what makes the flush in
> > xfs_fs_sync_fs() prevent races with the final stage of freeze before
> > inactivation is stopped......
> > 
> > .... and ....
> > 
> > as I write this the xfs/517 loop goes boom on my pmem test setup (but no DAX):
> > 
> > SECTION       -- xfs
> > FSTYP         -- xfs (debug)
> > PLATFORM      -- Linux/x86_64 test3 5.14.0-rc4-dgc #506 SMP PREEMPT Thu Aug 5 15:49:49 AEST 2021
> > MKFS_OPTIONS  -- -f -m rmapbt=1 /dev/pmem1
> > MOUNT_OPTIONS -- -o dax=never -o context=system_u:object_r:root_t:s0 /dev/pmem1 /mnt/scratch
> > 
> > generic/390 3s ...  3s
> > xfs/517 43s ... 
> > Message from syslogd@test3 at Aug  5 15:56:24 ...
> > kernel:[  162.849634] XFS: Assertion failed: mp->m_super->s_writers.frozen < SB_FREEZE_FS, file: fs/xfs/xfs_icache.c, line: 1889
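
[For reference, freeze_super() sequences the stages roughly like this -
paraphrased from fs/super.c of this vintage, with locking and error
handling elided:]

	sb->s_writers.frozen = SB_FREEZE_WRITE;
	sb_wait_write(sb, SB_FREEZE_WRITE);	/* drain normal writers */

	sb->s_writers.frozen = SB_FREEZE_PAGEFAULT;
	sb_wait_write(sb, SB_FREEZE_PAGEFAULT);	/* drain page fault writers */

	sync_filesystem(sb);			/* -> xfs_fs_sync_fs() */

	sb->s_writers.frozen = SB_FREEZE_FS;
	sb_wait_write(sb, SB_FREEZE_FS);	/* drain internal writers */

	sb->s_op->freeze_fs(sb);		/* -> xfs_fs_freeze() */

That ->sync_fs call is the last filesystem hook that runs before
s_writers.frozen reaches SB_FREEZE_FS, hence the suggestion below.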
> > 
> > I suspect that we could actually target this better and close the
> > race by doing something like:
> > 
> > xfs_fs_sync_fs()
> > {
> > 	....
> > 
> > 	/*
> > 	 * If we are called with page faults frozen out, it means we are about
> > 	 * to freeze the transaction subsystem. Take the opportunity to shut
> > 	 * down inodegc because once SB_FREEZE_FS is set it's too late to
> > 	 * prevent inactivation races with freeze. The fs doesn't get called
> > 	 * again by the freezing process until after SB_FREEZE_FS has been set,
> > 	 * so it's now or never.
> > 	 *
> > 	 * We don't care if this is a normal syncfs call that does this or
> > 	 * freeze that does this - we can run this multiple times without issue
> > 	 * and we won't race with a restart because a restart can only occur when
> > 	 * the state is either SB_FREEZE_FS or SB_FREEZE_COMPLETE.
> > 	 */
> > 	if (sb->s_writers.frozen == SB_FREEZE_PAGEFAULT)
> > 		xfs_inodegc_stop(mp);
> 
> LOL, a previous version of this series actually did this part this way,
> but...
> 
> > }
> > 
> > xfs_fs_freeze()
> > {
> > .....
> > error:
> > 	/*
> > 	 * We need to restart the inodegc on error because we stopped it at
> > 	 * SB_FREEZE_PAGEFAULT level and a thaw is not going to be run to
> > 	 * restart it now. We are at SB_FREEZE_FS level here, so we can restart
> > 	 * safely without racing with a stop in xfs_fs_sync_fs().
> > 	 */
> > 	if (error)
> > 		xfs_inodegc_start(mp);
> 
> ...missed this part.  If this fixes x517 and doesn't break g390 for you,
> I'll meld it into the series.  I think the reasoning here makes sense.

Nope, both x517 and g390 still fire this assert, so there's
something else we're missing here.

I keep wondering if we should be wrapping the entire flush mechanism
in an rwsem - read for flush, write for start/stop - so that we
aren't ever still processing a stop while a concurrent start runs or
vice versa...
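
Something like this completely untested sketch - the rwsem and its
name are hypothetical, and the existing start/stop/flush logic is
elided:

/* hypothetical addition to struct xfs_mount */
	struct rw_semaphore	m_inodegc_lock;

void
xfs_inodegc_flush(
	struct xfs_mount	*mp)
{
	/* shared: any number of flushers, but no start/stop under us */
	down_read(&mp->m_inodegc_lock);
	flush_workqueue(mp->m_inodegc_wq);
	up_read(&mp->m_inodegc_lock);
}

void
xfs_inodegc_stop(
	struct xfs_mount	*mp)
{
	/* exclusive: waits out flushers and any concurrent start/stop */
	down_write(&mp->m_inodegc_lock);
	/* existing disable-and-drain logic goes here */
	up_write(&mp->m_inodegc_lock);
}

void
xfs_inodegc_start(
	struct xfs_mount	*mp)
{
	down_write(&mp->m_inodegc_lock);
	/* existing re-enable logic goes here */
	up_write(&mp->m_inodegc_lock);
}

Completely untested, just to illustrate the shape of the idea.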

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 32+ messages
2021-08-05  2:06 [PATCHSET v9 00/14] xfs: deferred inode inactivation Darrick J. Wong
2021-08-05  2:06 ` [PATCH 01/14] xfs: introduce CPU hotplug infrastructure Darrick J. Wong
2021-08-05  2:06 ` [PATCH 02/14] xfs: introduce all-mounts list for cpu hotplug notifications Darrick J. Wong
2021-08-05  2:06 ` [PATCH 03/14] xfs: move xfs_inactive call to xfs_inode_mark_reclaimable Darrick J. Wong
2021-08-05  5:29   ` Dave Chinner
2021-08-05  2:06 ` [PATCH 04/14] xfs: detach dquots from inode if we don't need to inactivate it Darrick J. Wong
2021-08-05  5:30   ` Dave Chinner
2021-08-05  2:06 ` [PATCH 05/14] xfs: per-cpu deferred inode inactivation queues Darrick J. Wong
2021-08-05  6:43   ` Dave Chinner
2021-08-05  7:00     ` Darrick J. Wong
2021-08-05 22:15       ` Dave Chinner [this message]
2021-08-05 22:38         ` Darrick J. Wong
2021-08-07  0:21   ` Darrick J. Wong
2021-08-07 21:49     ` Dave Chinner
2021-08-09 23:36       ` Darrick J. Wong
2021-08-05  2:06 ` [PATCH 06/14] xfs: queue inactivation immediately when free space is tight Darrick J. Wong
2021-08-05  5:31   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 07/14] xfs: queue inactivation immediately when quota is nearing enforcement Darrick J. Wong
2021-08-05  5:35   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 08/14] xfs: queue inactivation immediately when free realtime extents are tight Darrick J. Wong
2021-08-05  5:36   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 09/14] xfs: inactivate inodes any time we try to free speculative preallocations Darrick J. Wong
2021-08-05  5:36   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 10/14] xfs: flush inode inactivation work when compiling usage statistics Darrick J. Wong
2021-08-05  5:38   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 11/14] xfs: don't run speculative preallocation gc when fs is frozen Darrick J. Wong
2021-08-05  5:40   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 12/14] xfs: use background worker pool when transactions can't get free space Darrick J. Wong
2021-08-05  5:42   ` Dave Chinner
2021-08-05  2:07 ` [PATCH 13/14] xfs: avoid buffer deadlocks when walking fs inodes Darrick J. Wong
2021-08-05  2:07 ` [PATCH 14/14] xfs: throttle inode inactivation queuing on memory reclaim Darrick J. Wong
2021-08-05  5:44   ` Dave Chinner
