From: "Darrick J. Wong" <email@example.com> To: firstname.lastname@example.org Cc: Christoph Hellwig <email@example.com>, Dave Chinner <firstname.lastname@example.org>, email@example.com, firstname.lastname@example.org, email@example.com Subject: [PATCHSET v6 0/9] xfs: deferred inode inactivation Date: Mon, 07 Jun 2021 15:24:53 -0700 [thread overview] Message-ID: <162310469340.3465262.504398465311182657.stgit@locust> (raw) Hi all, This patch series implements deferred inode inactivation. Inactivation is what happens when an open file loses its last incore reference: if the file has speculative preallocations, they must be freed, and if the file is unlinked, all forks must be truncated, and the inode marked freed in the inode chunk and the inode btrees. Currently, all of this activity is performed in frontend threads when the last in-memory reference is lost and/or the vfs decides to drop the inode. Three complaints stem from this behavior: first, that the time to unlink (in the worst case) depends on both the complexity of the directory as well as the the number of extents in that file; second, that deleting a directory tree is inefficient and seeky because we free the inodes in readdir order, not disk order; and third, the upcoming online repair feature needs to be able to xfs_irele while scanning a filesystem in transaction context. It cannot perform inode inactivation in this context because xfs does not support nested transactions. The implementation will be familiar to those who have studied how XFS scans for reclaimable in-core inodes -- we create a couple more inode state flags to mark an inode as needing inactivation and being in the middle of inactivation. When inodes need inactivation, we set NEED_INACTIVE in iflags, set the INACTIVE radix tree tag, and schedule a deferred work item. The deferred worker runs in an unbounded workqueue, scanning the inode radix tree for tagged inodes to inactivate, and performing all the on-disk metadata updates. 
Once the inode has been inactivated, it is left in the reclaim state, and the background reclaim worker (or direct reclaim) will get to it eventually.

Doing the inactivations from kernel threads solves the first problem by constraining the amount of work done by the unlink() call to removing the directory entry. It solves the third problem by moving inactivation to a separate process. Because the inactivations are done in order of inode number, we solve the second problem by performing updates in (we hope) disk order. This also decreases the amount of time it takes to let go of an inode cluster if we're deleting entire directory trees.

There are three big warts I can think of in this series. First, because the actual freeing of nlink==0 inodes is now done in the background, the system will be busy making metadata updates for some time after the unlink() call returns, which temporarily reduces available iops. Second, in order to retain the behavior that deleting 100TB of unshared data should result in a free space gain of 100TB, the statvfs and quota reporting ioctls wait for inactivation to finish, which increases the long tail latency of those calls. This behavior is, unfortunately, key to not introducing regressions in fstests. Third, the deferrals keep memory usage higher for longer, reduce opportunities to throttle the frontend when metadata load is heavy, and the unbounded workqueues can create transaction storms.

For v5 there are some serious changes against the older versions of this patchset -- we no longer cycle an inode's dquots to avoid fights with quotaoff, and we actually shut down the background gc threads when the filesystem is frozen.

v1-v2: NYE patchbombs
v3: rebase against 5.12-rc2 for submission.
v4: combine the can/has eofblocks predicates, clean up incore inode tree walks, fix inobt deadlock
v5: actually freeze the inode gc threads when we freeze the filesystem, consolidate the code that deals with inode tagging, and use foreground inactivation during quotaoff to avoid cycling dquots
v6: rebase to 5.13-rc4, fix quotaoff not to require foreground inactivation, refactor to use inode walk goals, use atomic bitflags to control the scheduling of gc workers

If you're going to start using this mess, you probably ought to just pull from my git trees, which are linked below. This is an extraordinary way to destroy everything. Enjoy! Comments and questions are, as always, welcome.

--D

kernel git tree:
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git/log/?h=deferred-inactivation-5.14
---
 Documentation/admin-guide/xfs.rst |   10 +
 fs/xfs/libxfs/xfs_ag.c            |    3 
 fs/xfs/libxfs/xfs_ag.h            |    3 
 fs/xfs/scrub/common.c             |    2 
 fs/xfs/xfs_bmap_util.c            |   43 +++
 fs/xfs/xfs_fsops.c                |    4 
 fs/xfs/xfs_globals.c              |    3 
 fs/xfs/xfs_icache.c               |  596 ++++++++++++++++++++++++++++++++-----
 fs/xfs/xfs_icache.h               |   37 ++
 fs/xfs/xfs_inode.c                |   60 +++-
 fs/xfs/xfs_inode.h                |   15 +
 fs/xfs/xfs_itable.c               |   42 ++-
 fs/xfs/xfs_iwalk.c                |   33 ++
 fs/xfs/xfs_linux.h                |    2 
 fs/xfs/xfs_log_recover.c          |    7 
 fs/xfs/xfs_mount.c                |   30 ++
 fs/xfs/xfs_mount.h                |   16 +
 fs/xfs/xfs_qm_syscalls.c          |    4 
 fs/xfs/xfs_super.c                |  130 +++++++-
 fs/xfs/xfs_sysctl.c               |    9 +
 fs/xfs/xfs_sysctl.h               |    1 
 fs/xfs/xfs_trace.h                |   14 +
 22 files changed, 955 insertions(+), 109 deletions(-)
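As a footnote to the statvfs wart described in the cover letter, the wait-for-inactivation behavior can be modeled in miniature. This is a hypothetical userspace sketch with invented names and counters, only to show why the reporting path must drain the backlog before answering:

```c
#include <assert.h>

/*
 * Hypothetical model: blocks owned by unlinked-but-not-yet-inactivated
 * files are not free yet, so the space-reporting path flushes the
 * deferred work first.  All names below are invented for illustration.
 */
long fs_free_blocks = 1000;	/* blocks already free on disk */
long pending_blocks = 0;	/* pinned by deferred inactivation */

/* unlink() returns immediately; freeing the blocks is deferred */
void defer_unlink(long blocks)
{
	pending_blocks += blocks;
}

/* drain the backlog, as the background gc worker eventually would */
void flush_inactivation(void)
{
	fs_free_blocks += pending_blocks;
	pending_blocks = 0;
}

/*
 * statfs-style reporting: wait for inactivation so that "deleting
 * 100TB frees 100TB" still holds, at the cost of long-tail latency
 */
long report_free_blocks(void)
{
	flush_inactivation();
	return fs_free_blocks;
}
```

The latency cost is visible here too: the caller of report_free_blocks() pays for all the pending work, which is exactly the long-tail effect noted above.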
Thread overview: 24+ messages

2021-06-07 22:24 [PATCHSET v6 0/9] xfs: deferred inode inactivation Darrick J. Wong [this message]
2021-06-07 22:24 ` [PATCH 1/9] xfs: refactor the inode recycling code Darrick J. Wong
2021-06-07 22:59 ` Dave Chinner
2021-06-08  0:14 ` Darrick J. Wong
2021-06-07 22:25 ` [PATCH 2/9] xfs: deferred inode inactivation Darrick J. Wong
2021-06-08  0:57 ` Dave Chinner
2021-06-08  4:40 ` Darrick J. Wong
2021-06-09  1:01 ` Dave Chinner
2021-06-09  1:28 ` Darrick J. Wong
2021-06-07 22:25 ` [PATCH 3/9] xfs: expose sysfs knob to control inode inactivation delay Darrick J. Wong
2021-06-08  1:09 ` Dave Chinner
2021-06-08  2:02 ` Darrick J. Wong
2021-06-07 22:25 ` [PATCH 4/9] xfs: force inode inactivation and retry fs writes when there isn't space Darrick J. Wong
2021-06-07 22:25 ` [PATCH 5/9] xfs: force inode garbage collection before fallocate when space is low Darrick J. Wong
2021-06-08  1:26 ` Dave Chinner
2021-06-08 11:48 ` Brian Foster
2021-06-08 15:32 ` Darrick J. Wong
2021-06-08 16:06 ` Brian Foster
2021-06-08 21:55 ` Dave Chinner
2021-06-09  0:25 ` Darrick J. Wong
2021-06-07 22:25 ` [PATCH 6/9] xfs: parallelize inode inactivation Darrick J. Wong
2021-06-07 22:25 ` [PATCH 7/9] xfs: create a polled function to force inode inactivation Darrick J. Wong
2021-06-07 22:25 ` [PATCH 8/9] xfs: don't run speculative preallocation gc when fs is frozen Darrick J. Wong
2021-06-07 22:25 ` [PATCH 9/9] xfs: avoid buffer deadlocks when walking fs inodes Darrick J. Wong