From: "Darrick J. Wong" <djwong@kernel.org>
To: Brian Foster <bfoster@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>, linux-xfs@vger.kernel.org
Subject: Re: [PATCH 3/3] xfs: don't let background reclaim forget sick inodes
Date: Thu, 3 Jun 2021 14:30:34 -0700
Message-ID: <20210603213034.GE26380@locust>
In-Reply-To: <YLjLtfDLw89A0gbS@bfoster>

On Thu, Jun 03, 2021 at 08:31:49AM -0400, Brian Foster wrote:
> On Thu, Jun 03, 2021 at 02:42:42PM +1000, Dave Chinner wrote:
> > On Wed, Jun 02, 2021 at 08:12:52PM -0700, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <djwong@kernel.org>
> > > 
> > > It's important that the filesystem retain its memory of sick inodes for
> > > a little while after problems are found so that reports can be collected
> > > about what was wrong.  Don't let background inode reclamation free sick
> > > inodes unless we're under memory pressure.
> > > 
> > > Signed-off-by: Darrick J. Wong <djwong@kernel.org>
> > > ---
> > >  fs/xfs/xfs_icache.c |   21 +++++++++++++++++----
> > >  1 file changed, 17 insertions(+), 4 deletions(-)
> > > 
> > > 
> > > diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> > > index 0e2b6c05e604..54285d1ad574 100644
> > > --- a/fs/xfs/xfs_icache.c
> > > +++ b/fs/xfs/xfs_icache.c
> > > @@ -911,7 +911,8 @@ xfs_dqrele_all_inodes(
> > >   */
> > >  static bool
> > >  xfs_reclaim_igrab(
> > > -	struct xfs_inode	*ip)
> > > +	struct xfs_inode	*ip,
> > > +	struct xfs_eofblocks	*eofb)
> > >  {
> > >  	ASSERT(rcu_read_lock_held());
> > >  
> > > @@ -922,6 +923,17 @@ xfs_reclaim_igrab(
> > >  		spin_unlock(&ip->i_flags_lock);
> > >  		return false;
> > >  	}
> > > +
> > > +	/*
> > > +	 * Don't reclaim a sick inode unless we're under memory pressure or the
> > > +	 * filesystem is unmounting.
> > > +	 */
> > > +	if (ip->i_sick && eofb == NULL &&
> > > +	    !(ip->i_mount->m_flags & XFS_MOUNT_UNMOUNTING)) {
> > > +		spin_unlock(&ip->i_flags_lock);
> > > +		return false;
> > > +	}
> > 
> > Using the "eofb == NULL" as a proxy for being under memory pressure
> > is ... a bit obtuse. If we've got a handful of sick inodes, then
> > there is no problem with just leaving the in memory regardless of
> > memory pressure. If we've got lots of sick inodes, we're likely to
> > end up in a shutdown state or be unmounted for checking real soon.
> > 
> 
> Agreed... it would be nice to see more explicit logic here. Using the
> presence or absence of an optional parameter that is meant to provide
> various controls is quite fragile.

Ok, I'll add a new private icwalk flag for reclaim callers to indicate
that it's ok to reclaim sick inodes:

	/* Don't reclaim a sick inode unless the caller asked for it. */
	if (ip->i_sick &&
	    (!icw || !(icw->icw_flags & XFS_ICWALK_FLAG_RECLAIM_SICK))) {
		spin_unlock(&ip->i_flags_lock);
		return false;
	}
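
The flag itself would sit next to the other private icwalk flags, something
like this (the exact bit position below is just a placeholder, not settled):

/*
 * Private reclaim flag: the caller is willing to have sick inodes reclaimed,
 * e.g. because the filesystem is going away anyway.
 */
#define XFS_ICWALK_FLAG_RECLAIM_SICK	(1U << 27)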

And then xfs_reclaim_inodes becomes:

void
xfs_reclaim_inodes(
	struct xfs_mount	*mp)
{
	struct xfs_icwalk	icw = {
		.icw_flags	= 0,
	};

	if (xfs_want_reclaim_sick(mp))
		icw.icw_flags |= XFS_ICWALK_FLAG_RECLAIM_SICK;

	while (radix_tree_tagged(&mp->m_perag_tree, XFS_ICI_RECLAIM_TAG)) {
		xfs_ail_push_all_sync(mp->m_ail);
		xfs_icwalk(mp, XFS_ICWALK_RECLAIM, &icw);
	}
}

Similar changes apply to xfs_reclaim_inodes_nr.
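
xfs_want_reclaim_sick() is a new helper that I haven't spelled out above;
roughly, it answers "is keeping sick inodes around pointless because the
filesystem is going away anyway?".  A sketch, with the exact set of
conditions not final:

/* Caller may reclaim sick inodes because their state is about to become moot. */
static inline bool
xfs_want_reclaim_sick(
	struct xfs_mount	*mp)
{
	return (mp->m_flags & XFS_MOUNT_UNMOUNTING) ||
	       (mp->m_flags & XFS_MOUNT_NORECOVERY) ||
	       XFS_FORCED_SHUTDOWN(mp);
}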

> > I'd just leave sick inodes around until unmount or shutdown occurs;
> > lots of sick inodes means repair is necessary right now, so
> > shutdown+unmount is the right solution here, not memory reclaim....
> > 
> 
> That seems like a dependency on a loose correlation, and rather
> dangerous... we're either assuming action on behalf of a user before the
> built-up state becomes a broader problem for the system, or that somehow
> a cascade of in-core inode problems is going to lead to a shutdown. I
> don't think that is a guarantee, or even necessarily likely. I think if
> we were to do something like pin sick inodes in memory indefinitely, as
> you've pointed out in the past for other such things, we should at least
> consider breakdown conditions and the potential for unbounded behavior.

Yes.  The subsequent health reporting patchset that I linked a few
responses ago is intended to help with the pinning behavior.  With it,
we'll be able to save the fact that inodes within a given AG were sick
even if we're forced to give back the memory.  At a later time, the
sysadmin can download the health report, initiate a scan to recover the
specific sickness information, and then schedule downtime or perform repairs.

https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=indirect-health-reporting

The trouble is, this is a user ABI change, so I'm trying to keep it out
of the landing path of deferred inactivation.

> IOW, if scrub decides it wants to pin sick inodes until shutdown, it
> should probably implement some kind of worst case threshold where it
> actually initiates shutdown based on broad health state.

It already can.  As an example, "xfs_scrub -a 1000 -e shutdown" will
shut down the filesystem after 1,000 errors.

> If we can't
> reasonably define something like that, then to me that is a pretty clear
> indication that an indefinite pinning strategy is probably too fragile.

This might be the case anyway.  Remember how I was working on some code
to set sick flags any time we saw a corruption anywhere in the
filesystem?  If that ever gets fully implemented, we could very well end
up pinning so much memory that the system would probably swap itself to
death because we won't let go of the bad inodes.

> OTOH, perhaps scrub has enough knowledge to implement some kind of
> policy where a sick object is pinned until we know the state has been
> queried at least once, then reclaim can have it? I guess we still may
> want to be careful about things like how many sick objects a single
> scrub scan can produce before there's an opportunity for userspace to
> query status; it's not clear to me how much of an issue that might be..
> 
> In any event, this all seems moderately more involved to get right vs
> what the current patch proposes. I think this patch is a reasonable step
> if we can clean up the logic a bit. Perhaps define a flag that contexts
> can use to explicitly reclaim or skip unhealthy inodes?

Done.

--D

> 
> Brian
> 
> > Cheers,
> > 
> > Dave.
> > -- 
> > Dave Chinner
> > david@fromorbit.com
> > 
> 
