From: Brian Foster <bfoster@redhat.com>
To: Dave Chinner <david@fromorbit.com>
Cc: "Darrick J. Wong" <djwong@kernel.org>,
	linux-xfs@vger.kernel.org, hch@infradead.org
Subject: Re: [PATCH 5/9] xfs: force inode garbage collection before fallocate when space is low
Date: Tue, 8 Jun 2021 07:48:05 -0400	[thread overview]
Message-ID: <YL9Y9YM6VtxSnq+c@bfoster> (raw)
In-Reply-To: <20210608012605.GI664593@dread.disaster.area>

On Tue, Jun 08, 2021 at 11:26:05AM +1000, Dave Chinner wrote:
> On Mon, Jun 07, 2021 at 03:25:21PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@kernel.org>
> > 
> > Generally speaking, when a user calls fallocate, they're looking to
> > preallocate space in a file in the largest contiguous chunks possible.
> > If free space is low, it's possible that the free space will look
> > unnecessarily fragmented because there are unlinked inodes that are
> > holding on to space that we could allocate.  When this happens,
> > fallocate makes suboptimal allocation decisions for the sake of deleted
> > files, which doesn't make much sense, so scan the filesystem for dead
> > items to delete to try to avoid this.
> > 
> > Note that there are a handful of fstests that fill a filesystem, delete
> > just enough files to allow a single large allocation, and check that
> > fallocate actually gets the allocation.  These tests regress because the
> > test runs fallocate before the inode gc has a chance to run, so add this
> > behavior to maintain as much of the old behavior as possible.
> 
> I don't think this is a good justification for the change. Just
> because the unit tests exploit an undefined behaviour that no
> filesystem actually guarantees to achieve a specific layout, it
> doesn't mean we always have to behave that way.
> 
> For example, many tests used to use reverse sequential writes to
> exploit deficiencies in the allocation algorithms to generate
> fragmented files. We fixed that problem and the tests broke because
> they couldn't fragment files any more.
> 
> Did we reject those changes because the tests broke? No, we didn't
> because the tests were exploiting an observed behaviour rather than
> a guaranteed behaviour.
> 
> So, yeah, "test does X to make Y happen" doesn't mean "X will always
> make Y happen". It just means the test needs to be made more robust,
> or we have to provide a way for the test to trigger the behaviour it
> needs.
> 

Agreed on all of this...

> Indeed, I think that the way to fix these sorts of issues is to have
> the tests issue a syncfs(2) after they've deleted the inodes and have
> the filesystem run a inodegc flush as part of the sync mechanism.
> 

... but it seems a bit of a leap to equate exploitation of a
historically poorly handled allocation pattern in developer tests with
adding a new requirement (i.e. sync) to achieve optimal behavior of a
fairly common allocation pattern (delete a file, use the space for
something else).

IOW, how to hack around test regressions aside (are the test regressions
actual ENOSPC failures or something else, btw?), what's the impact on
users/workloads that might operate under these conditions? I guess
historically we've always recommended not operating consistently in
<20% free space conditions, so to some degree there is an expectation
of less than optimal behavior if one decides to constantly bash an fs
into ENOSPC. Then again, with large enough files, can we put the
filesystem into that state ourselves without any indication to the user?

I kind of wonder whether, unless/until there's some kind of efficient
feedback between allocation and "pending" free space, deferred
inactivation should be an optimization tied to some kind of heuristic
that balances the amount of currently available free space against
pending free space (but I've not combed through the code enough to grok
whether this already does something like that).

Brian

> Then we don't need to do.....
> 
> > +/*
> > + * If the target device (or some part of it) is full enough that it won't be
> > + * able to satisfy the entire request, try to free inactive files to free up
> > + * space.  While it's perfectly fine to fill a preallocation request with a
> > + * bunch of short extents, we prefer to slow down preallocation requests to
> > + * combat long term fragmentation in new file data.
> > + */
> > +static int
> > +xfs_alloc_consolidate_freespace(
> > +	struct xfs_inode	*ip,
> > +	xfs_filblks_t		wanted)
> > +{
> > +	struct xfs_mount	*mp = ip->i_mount;
> > +	struct xfs_perag	*pag;
> > +	struct xfs_sb		*sbp = &mp->m_sb;
> > +	xfs_agnumber_t		agno;
> > +
> > +	if (!xfs_has_inodegc_work(mp))
> > +		return 0;
> > +
> > +	if (XFS_IS_REALTIME_INODE(ip)) {
> > +		if (sbp->sb_frextents * sbp->sb_rextsize >= wanted)
> > +			return 0;
> > +		goto free_space;
> > +	}
> > +
> > +	for_each_perag(mp, agno, pag) {
> > +		if (pag->pagf_freeblks >= wanted) {
> > +			xfs_perag_put(pag);
> > +			return 0;
> > +		}
> > +	}
> 
> ... really hurty things (e.g. on high AG count fs) on every fallocate()
> call, and we have a simple modification to the tests that allow them
> to work as they want to on both old and new kernels....
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 


