From: Dave Chinner <david@fromorbit.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: linux-xfs@vger.kernel.org, hch@infradead.org
Subject: Re: [PATCH, alternative v2] xfs: per-cpu deferred inode inactivation queues
Date: Wed, 4 Aug 2021 21:09:16 +1000 [thread overview]
Message-ID: <20210804110916.GM2757197@dread.disaster.area> (raw)
In-Reply-To: <20210804032030.GT3601443@magnolia>
On Tue, Aug 03, 2021 at 08:20:30PM -0700, Darrick J. Wong wrote:
> For everyone else following along at home, I've posted the current draft
> version of this whole thing in:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=deferred-inactivation-5.15
Overall this looks good - the fixes for the freeze problems I hit
are in other replies to this thread.
I omitted these commits from my test kernel because I think they
are unnecessary:

  xfs: queue inodegc worker immediately when memory is tight
  xfs: throttle inode inactivation queuing on memory reclaim
I think the first is unnecessary because reclaim of inodes from the
VFS usually happens in large batches, so early triggers aren't
desirable when we're getting thousands of inodes evicted by the
superblock shrinker at a time. If we've only got a handful of
inodes queued, inactivating them early isn't going to make much of
an impact on free memory. I could be wrong, but so far I have no
evidence that expediting inactivation is necessary.
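To illustrate the batching argument, here's a minimal sketch of the
threshold-based queuing I have in mind - the structure, the field and
function names (m_inodegc, m_inodegc_wq, i_gclist, inodegc_queue) and
the threshold value are all hypothetical, not the actual patchset
code:

#include <linux/llist.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

#define INODEGC_BATCH	32	/* illustrative batch size */

struct inodegc_queue {
	struct llist_head	list;	/* evicted inodes to inactivate */
	struct work_struct	work;	/* deferred inactivation work */
	unsigned int		items;	/* approximate queue depth */
};

static void inodegc_queue(struct xfs_mount *mp, struct xfs_inode *ip)
{
	/* hypothetical per-cpu queue hanging off the mount */
	struct inodegc_queue	*gc = get_cpu_ptr(mp->m_inodegc);

	llist_add(&ip->i_gclist, &gc->list);

	/*
	 * Don't kick the worker until a batch has built up - a handful
	 * of queued inodes frees almost nothing, while shrinker-driven
	 * eviction arrives thousands at a time and crosses the
	 * threshold by itself. The worker zeroes ->items when it
	 * drains the list.
	 */
	if (++gc->items >= INODEGC_BATCH)
		queue_work(mp->m_inodegc_wq, &gc->work);

	put_cpu_ptr(mp->m_inodegc);
}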
The second patch is the custom shrinker. Again, I just don't think
it is necessary: if reclaim-driven eviction generates any
significant amount of inactivation work, the deferred queue flush
thresholds will already be triggering it to run. Hence we don't
really need a separate mechanism to tell us that there is memory
pressure; the deferred work reacts to eviction from reclaim in
exactly the same way it reacts to eviction from unlink....
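That is, both eviction sources funnel through the same path -
another hedged sketch, using the same hypothetical names as above
(inode_needs_inactivation and inode_set_reclaimable are also made up
for illustration):

static void inode_mark_reclaimable(struct inode *inode)
{
	struct xfs_inode	*ip = XFS_I(inode);

	/*
	 * Whether the superblock shrinker evicted this inode under
	 * memory pressure or it is being torn down after unlink, it
	 * lands on the same deferred queue, so the flush thresholds
	 * above already respond to reclaim.
	 */
	if (inode_needs_inactivation(ip))
		inodegc_queue(ip->i_mount, ip);	/* defer inactivation */
	else
		inode_set_reclaimable(ip);	/* straight to reclaim */
}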
I've been running the patchset without these two patches on my
512MB test VM, and the only OOM kill I get from fstests is
generic/531. That's the "many open-but-unlinked" test, which
creates 50,000 open unlinked files per CPU. For this test VM, which
has 4 CPUs, that's 200,000 open, dirty iunlinked inodes and a lot
of pinned inode cluster buffers. At ~2kB of memory per unlinked
inode (ignoring the cluster buffers), this would consume about
400MB of the 512MB of RAM the VM has. The OOM killer takes out the
test programs holding the open files long before we get to 200,000
files, so this test never passed on this machine before this
patchset...
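Spelling that estimate out (the ~2kB/inode figure is my ballpark,
not a measured number):

  50,000 files/CPU * 4 CPUs   = 200,000 unlinked inodes
  200,000 inodes * ~2kB/inode ≈ 400,000kB ≈ 400MB

which is most of the VM's 512MB before the pinned cluster buffers
are even counted.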
I have a couple of extra patches to set up per-cpu hotplug
infrastructure before the deferred inode inactivation patch - I'll
post them after I finish this email. I'm going to leave it running
tests overnight.
Darrick, I'm pretty happy with the way the patchset is behaving
now. If you want to fold in the bug fixes I've posted and add in
the hotplug patches, then I think it's ready to be posted in full
again for review (assuming it all passes your testing).
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 47+ messages
2021-07-29 18:43 [PATCHSET v8 00/20] xfs: deferred inode inactivation Darrick J. Wong
2021-07-29 18:43 ` [PATCH 01/20] xfs: move xfs_inactive call to xfs_inode_mark_reclaimable Darrick J. Wong
2021-07-29 18:44 ` [PATCH 02/20] xfs: detach dquots from inode if we don't need to inactivate it Darrick J. Wong
2021-07-29 18:44 ` [PATCH 03/20] xfs: defer inode inactivation to a workqueue Darrick J. Wong
2021-07-30 4:24 ` Dave Chinner
2021-07-31 4:21 ` Darrick J. Wong
2021-08-01 21:49 ` Dave Chinner
2021-08-01 23:47 ` Dave Chinner
2021-08-03 8:34 ` [PATCH, alternative] xfs: per-cpu deferred inode inactivation queues Dave Chinner
2021-08-03 20:20 ` Darrick J. Wong
2021-08-04 3:20 ` [PATCH, alternative v2] " Darrick J. Wong
2021-08-04 10:03 ` [PATCH] xfs: inodegc needs to stop before freeze Dave Chinner
2021-08-04 12:37 ` Dave Chinner
2021-08-04 10:46 ` [PATCH] xfs: don't run inodegc flushes when inodegc is not active Dave Chinner
2021-08-04 16:20 ` Darrick J. Wong
2021-08-04 11:09 ` Dave Chinner [this message]
2021-08-04 15:59 ` [PATCH, alternative v2] xfs: per-cpu deferred inode inactivation queues Darrick J. Wong
2021-08-04 21:35 ` Dave Chinner
2021-08-04 11:49 ` [PATCH, pre-03/20 #1] xfs: introduce CPU hotplug infrastructure Dave Chinner
2021-08-04 11:50 ` [PATCH, pre-03/20 #2] xfs: introduce all-mounts list for cpu hotplug notifications Dave Chinner
2021-08-04 16:06 ` Darrick J. Wong
2021-08-04 21:17 ` Dave Chinner
2021-08-04 11:52 ` [PATCH, post-03/20 1/1] xfs: hook up inodegc to CPU dead notification Dave Chinner
2021-08-04 16:19 ` Darrick J. Wong
2021-08-04 21:48 ` Dave Chinner
2021-07-29 18:44 ` [PATCH 04/20] xfs: throttle inode inactivation queuing on memory reclaim Darrick J. Wong
2021-07-29 18:44 ` [PATCH 05/20] xfs: don't throttle memory reclaim trying to queue inactive inodes Darrick J. Wong
2021-07-29 18:44 ` [PATCH 06/20] xfs: throttle inodegc queuing on backlog Darrick J. Wong
2021-08-02 0:45 ` Dave Chinner
2021-08-02 1:30 ` Dave Chinner
2021-07-29 18:44 ` [PATCH 07/20] xfs: queue inodegc worker immediately when memory is tight Darrick J. Wong
2021-07-29 18:44 ` [PATCH 08/20] xfs: expose sysfs knob to control inode inactivation delay Darrick J. Wong
2021-07-29 18:44 ` [PATCH 09/20] xfs: reduce inactivation delay when free space is tight Darrick J. Wong
2021-07-29 18:44 ` [PATCH 10/20] xfs: reduce inactivation delay when quota are tight Darrick J. Wong
2021-07-29 18:44 ` [PATCH 11/20] xfs: reduce inactivation delay when realtime extents " Darrick J. Wong
2021-07-29 18:44 ` [PATCH 12/20] xfs: inactivate inodes any time we try to free speculative preallocations Darrick J. Wong
2021-07-29 18:45 ` [PATCH 13/20] xfs: flush inode inactivation work when compiling usage statistics Darrick J. Wong
2021-07-29 18:45 ` [PATCH 14/20] xfs: parallelize inode inactivation Darrick J. Wong
2021-08-02 0:55 ` Dave Chinner
2021-08-02 21:33 ` Darrick J. Wong
2021-07-29 18:45 ` [PATCH 15/20] xfs: reduce inactivation delay when AG free space are tight Darrick J. Wong
2021-07-29 18:45 ` [PATCH 16/20] xfs: queue inodegc worker immediately on backlog Darrick J. Wong
2021-07-29 18:45 ` [PATCH 17/20] xfs: don't run speculative preallocation gc when fs is frozen Darrick J. Wong
2021-07-29 18:45 ` [PATCH 18/20] xfs: scale speculative preallocation gc delay based on free space Darrick J. Wong
2021-07-29 18:45 ` [PATCH 19/20] xfs: use background worker pool when transactions can't get " Darrick J. Wong
2021-07-29 18:45 ` [PATCH 20/20] xfs: avoid buffer deadlocks when walking fs inodes Darrick J. Wong
2021-08-02 10:35 ` [PATCHSET v8 00/20] xfs: deferred inode inactivation Dave Chinner