linux-xfs.vger.kernel.org archive mirror
From: "Darrick J. Wong" <darrick.wong@oracle.com>
To: Brian Foster <bfoster@redhat.com>
Cc: Eric Sandeen <sandeen@redhat.com>, xfs <linux-xfs@vger.kernel.org>
Subject: Re: [PATCH v2] xfs_scrub: don't use statvfs to collect filesystem summary counts
Date: Fri, 9 Oct 2020 08:37:41 -0700	[thread overview]
Message-ID: <20201009153741.GT6540@magnolia>
In-Reply-To: <20201009113225.GB769470@bfoster>

On Fri, Oct 09, 2020 at 07:32:25AM -0400, Brian Foster wrote:
> On Fri, Oct 09, 2020 at 07:18:12AM -0400, Brian Foster wrote:
> > On Mon, Oct 05, 2020 at 09:37:37AM -0700, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <darrick.wong@oracle.com>
> > > 
> > > The function scrub_scan_estimate_blocks naïvely uses the statvfs counts
> > > to estimate the size and free blocks on the data volume.  Unfortunately,
> > > it fails to account for the fact that statvfs can return the size and
> > > free counts for the realtime volume if the root directory has the
> > > rtinherit flag set, which leads to phase 7 reporting totally absurd
> > > quantities.
> > > 
> > > Eric pointed out a further problem with statvfs, which is that the file
> > > counts are clamped to the current user's project quota inode limits.
> > > Therefore, we must not use statvfs for querying the filesystem summary
> > > counts.
> > > 
> > > The XFS_IOC_FSCOUNTS ioctl returns all the data we need, so use that
> > > instead.
> > > 
> > > Fixes: 604dd3345f35 ("xfs_scrub: filesystem counter collection functions")
> > > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > > ---
> > > v2: drop statvfs entirely
> > > ---
> > 
> > This doesn't seem to apply to for-next..?
> > 
> 
> Oops, never mind. Wrong tree...
> 
> 
> > Brian
> > 
> > >  scrub/fscounters.c |   27 ++++-----------------------
> > >  1 file changed, 4 insertions(+), 23 deletions(-)
> > > 
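For reference, here is a minimal standalone sketch of the XFS_IOC_FSCOUNTS
query the patch switches to (this is not part of the patch itself; it assumes
the xfsprogs development headers provide <xfs/xfs.h>, which defines the ioctl
and struct xfs_fsop_counts):

/* fscounts.c: print the raw XFS summary counters for a mounted filesystem. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs.h>

int main(int argc, char *argv[])
{
	struct xfs_fsop_counts	fc;
	int			fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <path on an XFS filesystem>\n",
				argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror(argv[1]);
		return 1;
	}

	/*
	 * Unlike statvfs, this never reroutes the block counts to the
	 * realtime device and never clamps the inode counts to the caller's
	 * project quota limits.
	 */
	if (ioctl(fd, XFS_IOC_FSCOUNTS, &fc) < 0) {
		perror("XFS_IOC_FSCOUNTS");
		close(fd);
		return 1;
	}

	printf("free data blocks: %llu\n", (unsigned long long)fc.freedata);
	printf("free rt extents:  %llu\n", (unsigned long long)fc.freertx);
	printf("allocated inodes: %llu\n", (unsigned long long)fc.allocino);
	printf("free inodes:      %llu\n", (unsigned long long)fc.freeino);

	close(fd);
	return 0;
}

Run against any file or directory on an XFS mount, it should print the same
counters that scrub_scan_estimate_blocks() consumes after this patch.
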
> > > diff --git a/scrub/fscounters.c b/scrub/fscounters.c
> > > index f9d64f8c008f..e9901fcdf6df 100644
> > > --- a/scrub/fscounters.c
> > > +++ b/scrub/fscounters.c
> > > @@ -130,38 +130,19 @@ scrub_scan_estimate_blocks(
> > >  	unsigned long long		*f_free)
> > >  {
> > >  	struct xfs_fsop_counts		fc;
> > > -	struct xfs_fsop_resblks		rb;
> > > -	struct statvfs			sfs;
> > >  	int				error;
> > >  
> > > -	/* Grab the fstatvfs counters, since it has to report accurately. */
> > > -	error = fstatvfs(ctx->mnt.fd, &sfs);
> > > -	if (error)
> > > -		return errno;
> > > -
> > >  	/* Fetch the filesystem counters. */
> > >  	error = ioctl(ctx->mnt.fd, XFS_IOC_FSCOUNTS, &fc);
> > >  	if (error)
> > >  		return errno;
> > >  
> > > -	/*
> > > -	 * XFS reserves some blocks to prevent hard ENOSPC, so add those
> > > -	 * blocks back to the free data counts.
> > > -	 */
> > > -	error = ioctl(ctx->mnt.fd, XFS_IOC_GET_RESBLKS, &rb);
> > > -	if (error)
> > > -		return errno;
> > > -
> > > -	sfs.f_bfree += rb.resblks_avail;
> > > -
> > > -	*d_blocks = sfs.f_blocks;
> > > -	if (ctx->mnt.fsgeom.logstart > 0)
> > > -		*d_blocks += ctx->mnt.fsgeom.logblocks;
> > > -	*d_bfree = sfs.f_bfree;
> > > +	*d_blocks = ctx->mnt.fsgeom.datablocks;
> > > +	*d_bfree = fc.freedata;
> > >  	*r_blocks = ctx->mnt.fsgeom.rtblocks;
> > >  	*r_bfree = fc.freertx;
> > > -	*f_files = sfs.f_files;
> > > -	*f_free = sfs.f_ffree;
> > > +	*f_files = fc.allocino;
> > > +	*f_free = fc.freeino;
> > >  
> 
> Aren't the free inode counters semantically different between statvfs
> and this ioctl? I thought stat had some logic to effectively show free
> data blocks as free inodes,

It does.

> whereas the ioctl looks like it just reads
> our internal counter (which IIRC is a subset of physically allocated
> inode chunks). Do we care about that semantic here either way?

Nope.  The one caller that cares (scrub/phase7.c) only wants to know the
number of inodes in use (f_files - f_free), which is unaffected by the
logic in xfs_fs_statfs.
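
Boiled down to a hedged sketch (the helper name is made up, not an actual
scrub/phase7.c function):

/* Requires <xfs/xfs.h> for struct xfs_fsop_counts. */
static inline unsigned long long
inodes_in_use(const struct xfs_fsop_counts *fc)
{
	/*
	 * Phase 7 only needs the count of inodes currently in use; the raw
	 * superblock counters give that directly, without the free-space
	 * and quota adjustments that statvfs layers on top of f_files and
	 * f_ffree.
	 */
	return fc->allocino - fc->freeino;
}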

I suppose I could trim the parameter list even further to return only
the file count...

--D

> Brian
> 
> > >  	return 0;
> > >  }
> > > 
> > 
> 


Thread overview: 6+ messages
2020-10-05 16:37 [PATCH v2] xfs_scrub: don't use statvfs to collect filesystem summary counts Darrick J. Wong
2020-10-09 11:18 ` Brian Foster
2020-10-09 11:32   ` Brian Foster
2020-10-09 15:37     ` Darrick J. Wong [this message]
2020-10-15  8:16       ` Christoph Hellwig
2020-10-12 19:29 ` Eric Sandeen
