From: "Darrick J. Wong" <djwong@kernel.org>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-xfs@vger.kernel.org, david@fromorbit.com
Subject: Re: [PATCH 1/3] xfs: only reset incore inode health state flags when reclaiming an inode
Date: Thu, 3 Jun 2021 13:41:57 -0700
Message-ID: <20210603204157.GZ26380@locust>
In-Reply-To: <YLjJcro1vhPTfGrv@bfoster>

On Thu, Jun 03, 2021 at 08:22:10AM -0400, Brian Foster wrote:
> On Wed, Jun 02, 2021 at 08:12:41PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@kernel.org>
> > 
> > While running some fuzz tests on inode metadata, I noticed that the
> > filesystem health report (as provided by xfs_spaceman) failed to report
> > the file corruption even when spaceman was run immediately after running
> > xfs_scrub to detect the corruption.  That isn't the intended behavior;
> > one ought to be able to run scrub to detect errors in the ondisk
> > metadata and be able to access those reports for some time after the
> > scrub.
> > 
> > After running the same sequence through an instrumented kernel, I
> > discovered the reason why -- scrub igets the file, scans it, marks it
> > sick, and ireleases the inode.  When the VFS lets go of the incore
> > inode, it moves to RECLAIMABLE state.  If spaceman igets the incore
> > inode before it moves to RECLAIM state, iget reinitializes the VFS
> > state, clears the sick and checked masks, and hands back the inode.  At
> > this point, the caller has the exact same incore inode, but with all the
> > health state erased.
> > 
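> > For illustration, the failing sequence is roughly the following (a
> > condensed sketch, not the exact call chain; the function names are
> > the real ones from fs/xfs/):
> > 
> > 	/* scrub: load the inode and record the corruption */
> > 	error = xfs_iget(mp, NULL, ino, 0, 0, &ip);
> > 	xfs_inode_mark_sick(ip, XFS_SICK_INO_CORE); /* sets ip->i_sick */
> > 	xfs_irele(ip);	/* VFS lets go; inode becomes RECLAIMABLE */
> > 
> > 	/* spaceman, moments later: same incore inode */
> > 	error = xfs_iget(mp, NULL, ino, 0, 0, &ip);
> > 	/*
> > 	 * The cache hit path set inode->i_state = I_NEW and (before
> > 	 * this patch) zeroed ip->i_sick and ip->i_checked, so any
> > 	 * health query now reports a clean inode.
> > 	 */
> > 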
> > In other words, we're erasing the incore inode's health state flags when
> > we've decided NOT to sever the link between the incore inode and the
> > ondisk inode.  This is wrong, so we need to remove the lines that zero
> > the fields from xfs_iget_cache_hit.
> > 
> > As a precaution, we add the same lines into xfs_reclaim_inode just after
> > we sever the link between incore and ondisk inode.  Strictly speaking
> > this isn't necessary because once an inode has gone through reclaim it
> > must go through xfs_inode_alloc (which also zeroes the state) and
> > xfs_iget is careful to check for mismatches between the inode it pulls
> > out of the radix tree and the one it wants.
> > 
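> > (That mismatch check, roughly -- a simplified excerpt of the guard
> > in xfs_iget_cache_hit(), not the verbatim code:)
> > 
> > 	/*
> > 	 * The inode found in the radix tree may have been freed and
> > 	 * recycled for a different inode number, so bounce the lookup
> > 	 * and retry if the numbers disagree.
> > 	 */
> > 	if (ip->i_ino != ino) {
> > 		error = -EAGAIN;
> > 		goto out_error;
> > 	}
> > 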
> > Fixes: 6772c1f11206 ("xfs: track metadata health status")
> > Signed-off-by: Darrick J. Wong <djwong@kernel.org>
> > ---
> 
> I think I reviewed this the last time around...

Oops, yes, my bad. :(

--D

> 
> Reviewed-by: Brian Foster <bfoster@redhat.com>
> 
> >  fs/xfs/xfs_icache.c |    5 ++---
> >  1 file changed, 2 insertions(+), 3 deletions(-)
> > 
> > 
> > diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> > index 396cc54ca03f..c3f912a9231b 100644
> > --- a/fs/xfs/xfs_icache.c
> > +++ b/fs/xfs/xfs_icache.c
> > @@ -523,9 +523,6 @@ xfs_iget_cache_hit(
> >  				XFS_INO_TO_AGINO(pag->pag_mount, ino),
> >  				XFS_ICI_RECLAIM_TAG);
> >  		inode->i_state = I_NEW;
> > -		ip->i_sick = 0;
> > -		ip->i_checked = 0;
> > -
> >  		spin_unlock(&ip->i_flags_lock);
> >  		spin_unlock(&pag->pag_ici_lock);
> >  	} else {
> > @@ -979,6 +976,8 @@ xfs_reclaim_inode(
> >  	spin_lock(&ip->i_flags_lock);
> >  	ip->i_flags = XFS_IRECLAIM;
> >  	ip->i_ino = 0;
> > +	ip->i_sick = 0;
> > +	ip->i_checked = 0;
> >  	spin_unlock(&ip->i_flags_lock);
> >  
> >  	xfs_iunlock(ip, XFS_ILOCK_EXCL);
> > 
> 
