* [RFC] [PATCH 0/4] xfs: reactivate inodes immediately in xfs_iget
@ 2024-02-01  0:30 Dave Chinner
  2024-02-01  0:30 ` [PATCH 1/4] xfs: make inode inactivation state changes atomic Dave Chinner
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Dave Chinner @ 2024-02-01  0:30 UTC (permalink / raw)
  To: linux-xfs

Currently xfs_iget() will flush inodes queued for inactivation
rather than recycling them. Flushing the inodegc queues causes
inactivation to run and the inodes to transition to the reclaimable
state, where they can then be recycled. The xfs_iget() code spins
for a short while before retrying the lookup, and will continue to
do so until the inode has moved to the reclaimable state, at which
point it will recycle the inode.

However, if the filesystem is frozen, we cannot flush the inodegc
queues because we can't make modifications during a freeze and so
inodegc is not running. Hence if the VFS tries to reference an inode
that needs inactivation (e.g. the shrinker reclaimed it just before
it was accessed again), xfs_iget() will just spin on the inode
waiting for the freeze to end so the inode can be flushed.

This can be triggered by creating a bunch of files with post-eof
blocks and stalling them on the inodegc queues like so:

# cp a-set-1MB-files* /mnt/xfs
# xfs_io -xr -c "freeze" /mnt/xfs
# echo 3 > /proc/sys/vm/drop_caches
# ls -l /mnt/test

If the timing is just right, then the 'ls -l' will hang spinning
on inodes as they are now sitting in XFS_NEED_INACTIVE state on
the inodegc queues and won't be processed until the filesystem is
thawed.

Instead of flushing the inode, we could just recycle the inode
immediately. That, however, is complicated by the use of lockless
singly linked lists for the inodegc queues. We can't just remove an
inode from the middle of such a list, so we need to enable lazy
removal from the inodegc queue.

To do this, we make the lockless list addition and removal atomic
w.r.t. the inode state changes via the ip->i_flags_lock. This lock
is also held during xfs_iget() lookups, so it serialises the inodegc
list processing against inode lookup as well.
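
Condensed down, the queue side ends up looking something like the
sketch below, where gc stands in for the per-cpu inodegc structure
(accounting and shrinker details elided - the real code is in
patches 1 and 2):

	spin_lock(&ip->i_flags_lock);
	ip->i_flags |= XFS_NEED_INACTIVE;
	if (llist_on_list(&ip->i_gclist)) {
		/* Already queued; lazy removal hasn't run yet. */
		spin_unlock(&ip->i_flags_lock);
		return;
	}
	llist_add(&ip->i_gclist, &gc->list);
	spin_unlock(&ip->i_flags_lock);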

This then enables us to use the XFS_NEED_INACTIVE state flag to
determine if the inode should be inactivated when removing it from
the inodegc list during inodegc work. i.e. the inodegc worker
decides if inactivation should take place, not the context that is
queuing the inode to the inodegc list.

Hence by clearing the XFS_NEED_INACTIVE flag, we can leave an inode
on the inodegc list and know that it will not be inactivated when
the worker next runs and sees it. The worker will just remove it
from the list and skip over it.
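
The worker side then does the list removal and the inactivation
decision under the same lock, along the lines of this sketch (error
handling elided - the real code is in patch 3):

	struct xfs_inode	*ip, *n;
	struct llist_node	*node;

	node = llist_del_all(&gc->list);
	llist_for_each_entry_safe(ip, n, node, i_gclist) {
		spin_lock(&ip->i_flags_lock);
		init_llist_node(&ip->i_gclist);	/* lazy list removal */
		if (!(ip->i_flags & XFS_NEED_INACTIVE)) {
			/* Reactivated after queuing - nothing to do. */
			spin_unlock(&ip->i_flags_lock);
			continue;
		}
		ip->i_flags |= XFS_INACTIVATING;
		ip->i_flags &= ~XFS_NEED_INACTIVE;
		spin_unlock(&ip->i_flags_lock);

		xfs_inodegc_inactivate(ip);
	}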

This gives us lazy list removal, and now we can immediately
reactivate the inode during lookup. This is similar to the recycling
of reclaimable inodes, but just a little bit different. I haven't
tried to combine the implementations - it could be done, but I think
that gets in the way of seeing how reactivation is different from
recycling.

By doing this, it means that the above series of operations will no
longer hang waiting for a thaw to occur. Indeed, we can see the
inode recycle stat getting bumped when the above reproducer is run -
it reactivates the inodes instead of hanging:

# xfs_stats.pl | grep recycle
    xs_ig_frecycle.......            75    vn_reclaim............           304
# cp a-set-1MB-files* /mnt/xfs
# xfs_io -xr -c "freeze" /mnt/xfs
# echo 3 > /proc/sys/vm/drop_caches
# ls -l /mnt/test > /dev/null
# xfs_stats.pl | grep recycle
    xs_ig_frecycle.......           100    vn_reclaim............           330
# xfs_io -xr -c "thaw" /mnt/xfs
# rm -rf /mnt/test/a-set*
# umount /mnt/test
#

This is all pretty simple - I don't think I've missed anything and
it runs the full fstests auto group with a few different configs
without regressions.

Comments, thoughts?



* [PATCH 1/4] xfs: make inode inactivation state changes atomic
  2024-02-01  0:30 [RFC] [PATCH 0/4] xfs: reactivate inodes immediately in xfs_iget Dave Chinner
@ 2024-02-01  0:30 ` Dave Chinner
  2024-02-01 19:07   ` Darrick J. Wong
  2024-02-01  0:30 ` [PATCH 2/4] xfs: prepare inode for i_gclist detection Dave Chinner
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Dave Chinner @ 2024-02-01  0:30 UTC (permalink / raw)
  To: linux-xfs

From: Dave Chinner <dchinner@redhat.com>

We need the XFS_NEED_INACTIVE flag to correspond to whether the
inode is on the inodegc queues so that we can then use this state
for lazy removal.

To do this, move the addition of the inode to the inodegc queue
under the ip->i_flags_lock so that it is atomic w.r.t. setting
the XFS_NEED_INACTIVE flag.

Then, when we remove the inode from the inodegc list to actually run
inactivation, clear the XFS_NEED_INACTIVE at the same time we are
setting XFS_INACTIVATING to indicate that inactivation is in
progress.

These changes result in all the state changes and inodegc queuing
being atomic w.r.t. each other and inode lookups via the use of the
ip->i_flags_lock.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_icache.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 06046827b5fe..425b55526386 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -1875,7 +1875,12 @@ xfs_inodegc_worker(
 	llist_for_each_entry_safe(ip, n, node, i_gclist) {
 		int	error;
 
-		xfs_iflags_set(ip, XFS_INACTIVATING);
+		/* Switch state to inactivating. */
+		spin_lock(&ip->i_flags_lock);
+		ip->i_flags |= XFS_INACTIVATING;
+		ip->i_flags &= ~XFS_NEED_INACTIVE;
+		spin_unlock(&ip->i_flags_lock);
+
 		error = xfs_inodegc_inactivate(ip);
 		if (error && !gc->error)
 			gc->error = error;
@@ -2068,9 +2073,13 @@ xfs_inodegc_queue(
 	unsigned long		queue_delay = 1;
 
 	trace_xfs_inode_set_need_inactive(ip);
+
+	/*
+	 * Put the addition of the inode to the gc list under the
+	 * ip->i_flags_lock so that the state change and list addition are
+	 * atomic w.r.t. lookup operations under the ip->i_flags_lock.
+	 */
 	spin_lock(&ip->i_flags_lock);
-	ip->i_flags |= XFS_NEED_INACTIVE;
-	spin_unlock(&ip->i_flags_lock);
 
 	cpu_nr = get_cpu();
 	gc = this_cpu_ptr(mp->m_inodegc);
@@ -2079,6 +2088,9 @@ xfs_inodegc_queue(
 	WRITE_ONCE(gc->items, items + 1);
 	shrinker_hits = READ_ONCE(gc->shrinker_hits);
 
+	ip->i_flags |= XFS_NEED_INACTIVE;
+	spin_unlock(&ip->i_flags_lock);
+
 	/*
 	 * Ensure the list add is always seen by anyone who finds the cpumask
 	 * bit set. This effectively gives the cpumask bit set operation
-- 
2.43.0



* [PATCH 2/4] xfs: prepare inode for i_gclist detection
  2024-02-01  0:30 [RFC] [PATCH 0/4] xfs: reactivate inodes immediately in xfs_iget Dave Chinner
  2024-02-01  0:30 ` [PATCH 1/4] xfs: make inode inactivation state changes atomic Dave Chinner
@ 2024-02-01  0:30 ` Dave Chinner
  2024-02-01 19:15   ` Darrick J. Wong
  2024-02-01  0:30 ` [PATCH 3/4] xfs: allow lazy removal of inodes from the inodegc queues Dave Chinner
  2024-02-01  0:30 ` [PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget Dave Chinner
  3 siblings, 1 reply; 10+ messages in thread
From: Dave Chinner @ 2024-02-01  0:30 UTC (permalink / raw)
  To: linux-xfs

From: Dave Chinner <dchinner@redhat.com>

We currently don't initialise the inode->i_gclist member because it
is not necessary for a pure llist_add/llist_del_all producer-
consumer usage pattern.  However, for lazy removal from the inodegc
list, we need to be able to determine if the inode is already on an
inodegc list before we queue it.

We can do this detection by using llist_on_list(), but this requires
that we initialise the llist_node before we use it, and we
re-initialise it when we remove it from the llist.

Because we already serialise the inodegc list add with inode state
changes under the ip->i_flags_lock, we can do the initialisation on
list removal atomically with the state change. We can also do the
check of whether the inode is already on an inodegc list inside the
state change region on insert.

This gives us the ability to use llist_on_list(&ip->i_gclist) to
determine if the inode needs to be queued for inactivation without
having to depend on inode state flags.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_icache.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 425b55526386..2dd1559aade2 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -114,6 +114,7 @@ xfs_inode_alloc(
 	spin_lock_init(&ip->i_ioend_lock);
 	ip->i_next_unlinked = NULLAGINO;
 	ip->i_prev_unlinked = 0;
+	init_llist_node(&ip->i_gclist);
 
 	return ip;
 }
@@ -1875,10 +1876,16 @@ xfs_inodegc_worker(
 	llist_for_each_entry_safe(ip, n, node, i_gclist) {
 		int	error;
 
-		/* Switch state to inactivating. */
+		/*
+		 * Switch state to inactivating and remove the inode from the
+		 * gclist. This allows the use of llist_on_list() in the queuing
+		 * code to determine if the inode is already on an inodegc
+		 * queue.
+		 */
 		spin_lock(&ip->i_flags_lock);
 		ip->i_flags |= XFS_INACTIVATING;
 		ip->i_flags &= ~XFS_NEED_INACTIVE;
+		init_llist_node(&ip->i_gclist);
 		spin_unlock(&ip->i_flags_lock);
 
 		error = xfs_inodegc_inactivate(ip);
@@ -2075,11 +2082,20 @@ xfs_inodegc_queue(
 	trace_xfs_inode_set_need_inactive(ip);
 
 	/*
-	 * Put the addition of the inode to the gc list under the
+	 * The addition of the inode to the gc list is done under the
 	 * ip->i_flags_lock so that the state change and list addition are
 	 * atomic w.r.t. lookup operations under the ip->i_flags_lock.
+	 * The removal is also done under the ip->i_flags_lock and so this
+	 * allows us to safely use llist_on_list() here to determine if the
+	 * inode is already queued on an inactivation queue.
 	 */
 	spin_lock(&ip->i_flags_lock);
+	ip->i_flags |= XFS_NEED_INACTIVE;
+
+	if (llist_on_list(&ip->i_gclist)) {
+		spin_unlock(&ip->i_flags_lock);
+		return;
+	}
 
 	cpu_nr = get_cpu();
 	gc = this_cpu_ptr(mp->m_inodegc);
@@ -2088,7 +2104,6 @@ xfs_inodegc_queue(
 	WRITE_ONCE(gc->items, items + 1);
 	shrinker_hits = READ_ONCE(gc->shrinker_hits);
 
-	ip->i_flags |= XFS_NEED_INACTIVE;
 	spin_unlock(&ip->i_flags_lock);
 
 	/*
-- 
2.43.0



* [PATCH 3/4] xfs: allow lazy removal of inodes from the inodegc queues
  2024-02-01  0:30 [RFC] [PATCH 0/4] xfs: reactivate inodes immediately in xfs_iget Dave Chinner
  2024-02-01  0:30 ` [PATCH 1/4] xfs: make inode inactivation state changes atomic Dave Chinner
  2024-02-01  0:30 ` [PATCH 2/4] xfs: prepare inode for i_gclist detection Dave Chinner
@ 2024-02-01  0:30 ` Dave Chinner
  2024-02-01 19:31   ` Darrick J. Wong
  2024-02-01  0:30 ` [PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget Dave Chinner
  3 siblings, 1 reply; 10+ messages in thread
From: Dave Chinner @ 2024-02-01  0:30 UTC (permalink / raw)
  To: linux-xfs

From: Dave Chinner <dchinner@redhat.com>

To allow us to recycle inodes that are awaiting inactivation, we
need to enable lazy removal of inodes from the list. Th elist is a
lockless single linked variant, so we can't just remove inodes from
the list at will.

Instead, we can remove them lazily whenever inodegc runs by enabling
the inodegc processing to determine whether inactivation needs to be
done at processing time rather than queuing time.

We've already modified the queuing code to only queue the inode if
it isn't already queued, so here all we need to do is modify the
queue processing to determine if inactivation needs to be done.

Hence we introduce the behaviour that we can cancel inactivation
processing simply by clearing the XFS_NEED_INACTIVE flag on the
inode. Processing will check this flag and skip inactivation
processing if it is not set. The flag is always set at queuing time,
regardless of whether the inode is already on the queues or not.
Hence if it is not set at processing time, it means that something
has cancelled the inactivation and we should just remove it from the
list and then leave it alone.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_icache.c | 38 ++++++++++++++++++++++++++++++--------
 1 file changed, 30 insertions(+), 8 deletions(-)

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 2dd1559aade2..10588f78f679 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -1877,15 +1877,23 @@ xfs_inodegc_worker(
 		int	error;
 
 		/*
-		 * Switch state to inactivating and remove the inode from the
-		 * gclist. This allows the use of llist_on_list() in the queuing
-		 * code to determine if the inode is already on an inodegc
-		 * queue.
+		 * Remove the inode from the gclist and determine if it needs to
+		 * be processed. The XFS_NEED_INACTIVE flag gets cleared if the
+		 * inode is reactivated after queuing, but the list removal is
+		 * lazy and left up to us.
+		 *
+		 * We always remove the inode from the list to allow the use of
+		 * llist_on_list() in the queuing code to determine if the inode
+		 * is already on an inodegc queue.
 		 */
 		spin_lock(&ip->i_flags_lock);
+		init_llist_node(&ip->i_gclist);
+		if (!(ip->i_flags & XFS_NEED_INACTIVE)) {
+			spin_unlock(&ip->i_flags_lock);
+			continue;
+		}
 		ip->i_flags |= XFS_INACTIVATING;
 		ip->i_flags &= ~XFS_NEED_INACTIVE;
-		init_llist_node(&ip->i_gclist);
 		spin_unlock(&ip->i_flags_lock);
 
 		error = xfs_inodegc_inactivate(ip);
@@ -2153,7 +2161,6 @@ xfs_inode_mark_reclaimable(
 	struct xfs_inode	*ip)
 {
 	struct xfs_mount	*mp = ip->i_mount;
-	bool			need_inactive;
 
 	XFS_STATS_INC(mp, vn_reclaim);
 
@@ -2162,8 +2169,23 @@ xfs_inode_mark_reclaimable(
 	 */
 	ASSERT_ALWAYS(!xfs_iflags_test(ip, XFS_ALL_IRECLAIM_FLAGS));
 
-	need_inactive = xfs_inode_needs_inactive(ip);
-	if (need_inactive) {
+	/*
+	 * If the inode is already queued for inactivation because it was
+	 * re-activated and is now being reclaimed again (e.g. fs has been
+	 * frozen for a while) we must ensure that the inode waits for inodegc
+	 * to be run and removes it from the inodegc queue before it moves to
+	 * the reclaimable state and gets freed.
+	 *
+	 * We don't care about races here. We can't race with a list addition
+	 * because only one thread can be evicting the inode from the VFS cache,
+	 * hence false negatives can't occur and we only need to worry about
+	 * list removal races.  If we get a false positive from a list removal
+	 * race, then the inode goes through the inactive list whether it needs
+	 * to or not. This will slow down reclaim of this inode slightly but
+	 * should have no other side effects.
+	 */
+	if (llist_on_list(&ip->i_gclist) ||
+	    xfs_inode_needs_inactive(ip)) {
 		xfs_inodegc_queue(ip);
 		return;
 	}
-- 
2.43.0



* [PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget
  2024-02-01  0:30 [RFC] [PATCH 0/4] xfs: reactivate inodes immediately in xfs_iget Dave Chinner
                   ` (2 preceding siblings ...)
  2024-02-01  0:30 ` [PATCH 3/4] xfs: allow lazy removal of inodes from the inodegc queues Dave Chinner
@ 2024-02-01  0:30 ` Dave Chinner
  2024-02-01 19:36   ` Darrick J. Wong
  2024-02-14  4:00   ` kernel test robot
  3 siblings, 2 replies; 10+ messages in thread
From: Dave Chinner @ 2024-02-01  0:30 UTC (permalink / raw)
  To: linux-xfs

From: Dave Chinner <dchinner@redhat.com>

When xfs_iget() finds an inode that is queued for inactivation, it
issues an inodegc flush to trigger the inactivation work and then
retries the lookup.

However, when the filesystem is frozen, inodegc is turned off and
the flush does nothing and does not block. This results in lookup
spinning on NEED_INACTIVE inodes and being unable to make progress
until the filesystem is thawed. This is less than ideal.

The only reason we can't immediately recycle the inode is that it is
queued on a lockless list we can't remove it from. However, those
lists now support lazy removal, and so we can now modify the lookup
code to reactivate inodes queued for inactivation. The process is
identical to how we recycle reclaimable inodes from xfs_iget(), so
this ends up being a relatively simple change to make.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_icache.c | 98 +++++++++++++++++++++++++++++++++++----------
 1 file changed, 76 insertions(+), 22 deletions(-)

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 10588f78f679..1fc55ed0692c 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -64,6 +64,8 @@ static int xfs_icwalk_ag(struct xfs_perag *pag,
 					 XFS_ICWALK_FLAG_RECLAIM_SICK | \
 					 XFS_ICWALK_FLAG_UNION)
 
+static void xfs_inodegc_queue(struct xfs_inode *ip);
+
 /*
  * Allocate and initialise an xfs_inode.
  */
@@ -328,6 +330,7 @@ xfs_reinit_inode(
 	return error;
 }
 
+
 /*
  * Carefully nudge an inode whose VFS state has been torn down back into a
  * usable state.  Drops the i_flags_lock and the rcu read lock.
@@ -391,7 +394,71 @@ xfs_iget_recycle(
 	inode->i_state = I_NEW;
 	spin_unlock(&ip->i_flags_lock);
 	spin_unlock(&pag->pag_ici_lock);
+	XFS_STATS_INC(mp, xs_ig_frecycle);
+	return 0;
+}
 
+static int
+xfs_iget_reactivate(
+	struct xfs_perag	*pag,
+	struct xfs_inode	*ip) __releases(&ip->i_flags_lock)
+{
+	struct xfs_mount	*mp = ip->i_mount;
+	struct inode		*inode = VFS_I(ip);
+	int			error;
+
+	trace_xfs_iget_recycle(ip);
+
+	/*
+	 * Take the ILOCK here to serialise against lookup races with putting
+	 * the inode back on the inodegc queue during error handling.
+	 */
+	if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
+		return -EAGAIN;
+
+	/*
+	 * Move the state to inactivating so both inactivation and racing
+	 * lookups will skip over this inode until we've finished reactivating
+	 * it and can return it to the XFS_INEW state.
+	 */
+	ip->i_flags &= ~XFS_NEED_INACTIVE;
+	ip->i_flags |= XFS_INACTIVATING;
+	spin_unlock(&ip->i_flags_lock);
+	rcu_read_unlock();
+
+	ASSERT(!rwsem_is_locked(&inode->i_rwsem));
+	error = xfs_reinit_inode(mp, inode);
+	if (error) {
+		/*
+		 * Well, that sucks. Put the inode back on the inactive queue.
+		 * Do this while still under the ILOCK so that we can set the
+		 * NEED_INACTIVE flag and clear the INACTIVATING flag and not
+		 * have another lookup race with us before we've finished
+		 * putting the inode back on the inodegc queue.
+		 */
+		spin_lock(&ip->i_flags_lock);
+		ip->i_flags |= XFS_NEED_INACTIVE;
+		ip->i_flags &= ~XFS_INACTIVATING;
+		spin_unlock(&ip->i_flags_lock);
+
+		xfs_inodegc_queue(ip);
+		xfs_iunlock(ip, XFS_ILOCK_EXCL);
+
+		return error;
+	}
+	xfs_iunlock(ip, XFS_ILOCK_EXCL);
+
+	/*
+	 * Reset the inode state to new so that xfs_iget() will complete
+	 * the required remaining inode initialisation before it returns the
+	 * inode to the caller.
+	 */
+	spin_lock(&ip->i_flags_lock);
+	ip->i_flags &= ~XFS_IRECLAIM_RESET_FLAGS;
+	ip->i_flags |= XFS_INEW;
+	inode->i_state = I_NEW;
+	spin_unlock(&ip->i_flags_lock);
+	XFS_STATS_INC(mp, xs_ig_frecycle);
 	return 0;
 }
 
@@ -523,14 +590,6 @@ xfs_iget_cache_hit(
 	if (ip->i_flags & (XFS_INEW | XFS_IRECLAIM | XFS_INACTIVATING))
 		goto out_skip;
 
-	if (ip->i_flags & XFS_NEED_INACTIVE) {
-		/* Unlinked inodes cannot be re-grabbed. */
-		if (VFS_I(ip)->i_nlink == 0) {
-			error = -ENOENT;
-			goto out_error;
-		}
-		goto out_inodegc_flush;
-	}
 
 	/*
 	 * Check the inode free state is valid. This also detects lookup
@@ -542,11 +601,18 @@ xfs_iget_cache_hit(
 
 	/* Skip inodes that have no vfs state. */
 	if ((flags & XFS_IGET_INCORE) &&
-	    (ip->i_flags & XFS_IRECLAIMABLE))
+	    (ip->i_flags & (XFS_IRECLAIMABLE | XFS_NEED_INACTIVE)))
 		goto out_skip;
 
 	/* The inode fits the selection criteria; process it. */
-	if (ip->i_flags & XFS_IRECLAIMABLE) {
+	if (ip->i_flags & XFS_NEED_INACTIVE) {
+		/* Drops i_flags_lock and RCU read lock. */
+		error = xfs_iget_reactivate(pag, ip);
+		if (error == -EAGAIN)
+			goto out_skip;
+		if (error)
+			return error;
+	} else if (ip->i_flags & XFS_IRECLAIMABLE) {
 		/* Drops i_flags_lock and RCU read lock. */
 		error = xfs_iget_recycle(pag, ip);
 		if (error == -EAGAIN)
@@ -575,23 +641,11 @@ xfs_iget_cache_hit(
 
 out_skip:
 	trace_xfs_iget_skip(ip);
-	XFS_STATS_INC(mp, xs_ig_frecycle);
 	error = -EAGAIN;
 out_error:
 	spin_unlock(&ip->i_flags_lock);
 	rcu_read_unlock();
 	return error;
-
-out_inodegc_flush:
-	spin_unlock(&ip->i_flags_lock);
-	rcu_read_unlock();
-	/*
-	 * Do not wait for the workers, because the caller could hold an AGI
-	 * buffer lock.  We're just going to sleep in a loop anyway.
-	 */
-	if (xfs_is_inodegc_enabled(mp))
-		xfs_inodegc_queue_all(mp);
-	return -EAGAIN;
 }
 
 static int
-- 
2.43.0



* Re: [PATCH 1/4] xfs: make inode inactivation state changes atomic
  2024-02-01  0:30 ` [PATCH 1/4] xfs: make inode inactivation state changes atomic Dave Chinner
@ 2024-02-01 19:07   ` Darrick J. Wong
  0 siblings, 0 replies; 10+ messages in thread
From: Darrick J. Wong @ 2024-02-01 19:07 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs

On Thu, Feb 01, 2024 at 11:30:13AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> We need the XFS_NEED_INACTIVE flag to correspond to whether the
> inode is on the inodegc queues so that we can then use this state
> for lazy removal.
> 
> To do this, move the addition of the inode to the inodegc queue
> under the ip->i_flags_lock so that it is atomic w.r.t. setting
> the XFS_NEED_INACTIVE flag.
> 
> Then, when we remove the inode from the inodegc list to actually run
> inactivation, clear the XFS_NEED_INACTIVE at the same time we are
> setting XFS_INACTIVATING to indicate that inactivation is in
> progress.
> 
> These changes result in all the state changes and inodegc queuing
> being atomic w.r.t. each other and inode lookups via the use of the
> ip->i_flags_lock.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_icache.c | 18 +++++++++++++++---
>  1 file changed, 15 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index 06046827b5fe..425b55526386 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -1875,7 +1875,12 @@ xfs_inodegc_worker(
>  	llist_for_each_entry_safe(ip, n, node, i_gclist) {
>  		int	error;
>  
> -		xfs_iflags_set(ip, XFS_INACTIVATING);
> +		/* Switch state to inactivating. */
> +		spin_lock(&ip->i_flags_lock);
> +		ip->i_flags |= XFS_INACTIVATING;
> +		ip->i_flags &= ~XFS_NEED_INACTIVE;

The comment for XFS_INACTIVATING ought to be updated to state that
NEED_INACTIVE is cleared at the same time that INACTIVATING is set.

> +		spin_unlock(&ip->i_flags_lock);
> +
>  		error = xfs_inodegc_inactivate(ip);
>  		if (error && !gc->error)
>  			gc->error = error;
> @@ -2068,9 +2073,13 @@ xfs_inodegc_queue(
>  	unsigned long		queue_delay = 1;
>  
>  	trace_xfs_inode_set_need_inactive(ip);
> +
> +	/*
> +	 * Put the addition of the inode to the gc list under the
> +	 * ip->i_flags_lock so that the state change and list addition are
> +	 * atomic w.r.t. lookup operations under the ip->i_flags_lock.
> +	 */
>  	spin_lock(&ip->i_flags_lock);
> -	ip->i_flags |= XFS_NEED_INACTIVE;
> -	spin_unlock(&ip->i_flags_lock);
>  
>  	cpu_nr = get_cpu();
>  	gc = this_cpu_ptr(mp->m_inodegc);
> @@ -2079,6 +2088,9 @@ xfs_inodegc_queue(
>  	WRITE_ONCE(gc->items, items + 1);
>  	shrinker_hits = READ_ONCE(gc->shrinker_hits);
>  
> +	ip->i_flags |= XFS_NEED_INACTIVE;
> +	spin_unlock(&ip->i_flags_lock);

This change mostly makes sense to me, but is it necessary to move the
line that sets XFS_NEED_INACTIVE?  This change extends the critical
section so that the llist_add and the flags update are atomic, so
couldn't this change reduce down to moving the spin_unlock call?

(IOWs I'm not sure if there's a subtlety here or if this is merely rough
draft syndrome.)

--D

> +
>  	/*
>  	 * Ensure the list add is always seen by anyone who finds the cpumask
>  	 * bit set. This effectively gives the cpumask bit set operation
> -- 
> 2.43.0
> 
> 


* Re: [PATCH 2/4] xfs: prepare inode for i_gclist detection
  2024-02-01  0:30 ` [PATCH 2/4] xfs: prepare inode for i_gclist detection Dave Chinner
@ 2024-02-01 19:15   ` Darrick J. Wong
  0 siblings, 0 replies; 10+ messages in thread
From: Darrick J. Wong @ 2024-02-01 19:15 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs

On Thu, Feb 01, 2024 at 11:30:14AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> We currently don't initialise the inode->i_gclist member because it
> is not necessary for a pure llist_add/llist_del_all producer-
> consumer usage pattern.  However, for lazy removal from the inodegc
> list, we need to be able to determine if the inode is already on an
> inodegc list before we queue it.
> 
> We can do this detection by using llist_on_list(), but this requires
> that we initialise the llist_node before we use it, and we
> re-initialise it when we remove it from the llist.
> 
> Because we already serialise the inodegc list add with inode state
> changes under the ip->i_flags_lock, we can do the initialisation on
> list removal atomically with the state change. We can also do the
> check of whether the inode is already on an inodegc list inside the
> state change region on insert.
> 
> This gives us the ability to use llist_on_list(&ip->i_gclist) to
> determine if the inode needs to be queued for inactivation without
> having to depend on inode state flags.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_icache.c | 21 ++++++++++++++++++---
>  1 file changed, 18 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index 425b55526386..2dd1559aade2 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -114,6 +114,7 @@ xfs_inode_alloc(
>  	spin_lock_init(&ip->i_ioend_lock);
>  	ip->i_next_unlinked = NULLAGINO;
>  	ip->i_prev_unlinked = 0;
> +	init_llist_node(&ip->i_gclist);
>  
>  	return ip;
>  }
> @@ -1875,10 +1876,16 @@ xfs_inodegc_worker(
>  	llist_for_each_entry_safe(ip, n, node, i_gclist) {
>  		int	error;
>  
> -		/* Switch state to inactivating. */
> +		/*
> +		 * Switch state to inactivating and remove the inode from the
> +		 * gclist. This allows the use of llist_on_list() in the queuing
> +		 * code to determine if the inode is already on an inodegc
> +		 * queue.
> +		 */
>  		spin_lock(&ip->i_flags_lock);
>  		ip->i_flags |= XFS_INACTIVATING;
>  		ip->i_flags &= ~XFS_NEED_INACTIVE;
> +		init_llist_node(&ip->i_gclist);
>  		spin_unlock(&ip->i_flags_lock);
>  
>  		error = xfs_inodegc_inactivate(ip);
> @@ -2075,11 +2082,20 @@ xfs_inodegc_queue(
>  	trace_xfs_inode_set_need_inactive(ip);
>  
>  	/*
> -	 * Put the addition of the inode to the gc list under the
> +	 * The addition of the inode to the gc list is done under the
>  	 * ip->i_flags_lock so that the state change and list addition are
>  	 * atomic w.r.t. lookup operations under the ip->i_flags_lock.
> +	 * The removal is also done under the ip->i_flags_lock and so this
> +	 * allows us to safely use llist_on_list() here to determine if the
> +	 * inode is already queued on an inactivation queue.
>  	 */
>  	spin_lock(&ip->i_flags_lock);
> +	ip->i_flags |= XFS_NEED_INACTIVE;

Oh, I see, the flag setting line moves back.

Other than that everything makes sense here...
Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D


> +
> +	if (llist_on_list(&ip->i_gclist)) {
> +		spin_unlock(&ip->i_flags_lock);
> +		return;
> +	}
>  
>  	cpu_nr = get_cpu();
>  	gc = this_cpu_ptr(mp->m_inodegc);
> @@ -2088,7 +2104,6 @@ xfs_inodegc_queue(
>  	WRITE_ONCE(gc->items, items + 1);
>  	shrinker_hits = READ_ONCE(gc->shrinker_hits);
>  
> -	ip->i_flags |= XFS_NEED_INACTIVE;
>  	spin_unlock(&ip->i_flags_lock);
>  
>  	/*
> -- 
> 2.43.0
> 
> 


* Re: [PATCH 3/4] xfs: allow lazy removal of inodes from the inodegc queues
  2024-02-01  0:30 ` [PATCH 3/4] xfs: allow lazy removal of inodes from the inodegc queues Dave Chinner
@ 2024-02-01 19:31   ` Darrick J. Wong
  0 siblings, 0 replies; 10+ messages in thread
From: Darrick J. Wong @ 2024-02-01 19:31 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs

On Thu, Feb 01, 2024 at 11:30:15AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> To allow us to recycle inodes that are awaiting inactivation, we
> need to enable lazy removal of inodes from the list. Th elist is a

s/Th elist/The list/

> lockless single linked variant, so we can't just remove inodes from
> the list at will.
> 
> Instead, we can remove them lazily whenever inodegc runs by enabling
> the inodegc processing to determine whether inactivation needs to be
> done at processing time rather than queuing time.
> 
> We've already modified the queuing code to only queue the inode if
> it isn't already queued, so here all we need to do is modify the
> queue processing to determine if inactivation needs to be done.
> 
> Hence we introduce the behaviour that we can cancel inactivation
> processing simply by clearing the XFS_NEED_INACTIVE flag on the
> inode. Processing will check this flag and skip inactivation
> processing if it is not set. The flag is always set at queuing time,
> regardless of whether the inode is already on the queues or not.
> Hence if it is not set at processing time, it means that something
> has cancelled the inactivation and we should just remove it from the
> list and then leave it alone.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_icache.c | 38 ++++++++++++++++++++++++++++++--------
>  1 file changed, 30 insertions(+), 8 deletions(-)
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index 2dd1559aade2..10588f78f679 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -1877,15 +1877,23 @@ xfs_inodegc_worker(
>  		int	error;
>  
>  		/*
> -		 * Switch state to inactivating and remove the inode from the
> -		 * gclist. This allows the use of llist_on_list() in the queuing
> -		 * code to determine if the inode is already on an inodegc
> -		 * queue.
> +		 * Remove the inode from the gclist and determine if it needs to
> +		 * be processed. The XFS_NEED_INACTIVE flag gets cleared if the
> +		 * inode is reactivated after queuing, but the list removal is
> +		 * lazy and left up to us.
> +		 *
> +		 * We always remove the inode from the list to allow the use of
> +		 * llist_on_list() in the queuing code to determine if the inode
> +		 * is already on an inodegc queue.
>  		 */
>  		spin_lock(&ip->i_flags_lock);
> +		init_llist_node(&ip->i_gclist);
> +		if (!(ip->i_flags & XFS_NEED_INACTIVE)) {
> +			spin_unlock(&ip->i_flags_lock);
> +			continue;
> +		}
>  		ip->i_flags |= XFS_INACTIVATING;
>  		ip->i_flags &= ~XFS_NEED_INACTIVE;
> -		init_llist_node(&ip->i_gclist);

Nit: unnecessary churn from the last patch.

So if I understand this correctly, if we think a released inode needs
inactivation, we put it on the percpu gclist and set NEED_INACTIVE.
Once it's on there, only the inodegc worker can remove it from that
list.  The novel part here is that now we serialize the i_gclist update
with i_flags_lock, which means that the inodegc worker can observe that
NEED_INACTIVE fell off the inode, and ignore the inode.

This sounds pretty similar to the v8 deferred inode inactivation series
where one could untag a NEED_INACTIVE inode to prevent the inodegc
worker from finding it, though now ported to lockless lists that showed
up for v9.

>  		spin_unlock(&ip->i_flags_lock);
>  
>  		error = xfs_inodegc_inactivate(ip);
> @@ -2153,7 +2161,6 @@ xfs_inode_mark_reclaimable(
>  	struct xfs_inode	*ip)
>  {
>  	struct xfs_mount	*mp = ip->i_mount;
> -	bool			need_inactive;
>  
>  	XFS_STATS_INC(mp, vn_reclaim);
>  
> @@ -2162,8 +2169,23 @@ xfs_inode_mark_reclaimable(
>  	 */
>  	ASSERT_ALWAYS(!xfs_iflags_test(ip, XFS_ALL_IRECLAIM_FLAGS));
>  
> -	need_inactive = xfs_inode_needs_inactive(ip);
> -	if (need_inactive) {
> +	/*
> +	 * If the inode is already queued for inactivation because it was
> +	 * re-activated and is now being reclaimed again (e.g. fs has been
> +	 * frozen for a while) we must ensure that the inode waits for inodegc
> +	 * to be run and removes it from the inodegc queue before it moves to
> +	 * the reclaimable state and gets freed.
> +	 *
> +	 * We don't care about races here. We can't race with a list addition
> +	 * because only one thread can be evicting the inode from the VFS cache,
> +	 * hence false negatives can't occur and we only need to worry about
> +	 * list removal races.  If we get a false positive from a list removal
> +	 * race, then the inode goes through the inactive list whether it needs
> +	 * to or not. This will slow down reclaim of this inode slightly but
> +	 * should have no other side effects.

That makes sense to me.

> +	 */
> +	if (llist_on_list(&ip->i_gclist) ||
> +	    xfs_inode_needs_inactive(ip)) {

With the nits fixed,
Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

>  		xfs_inodegc_queue(ip);
>  		return;
>  	}
> -- 
> 2.43.0
> 
> 


* Re: [PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget
  2024-02-01  0:30 ` [PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget Dave Chinner
@ 2024-02-01 19:36   ` Darrick J. Wong
  2024-02-14  4:00   ` kernel test robot
  1 sibling, 0 replies; 10+ messages in thread
From: Darrick J. Wong @ 2024-02-01 19:36 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs

On Thu, Feb 01, 2024 at 11:30:16AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> When xfs_iget() finds an inode that is queued for inactivation, it
> issues an inodegc flush to trigger the inactivation work and then
> retries the lookup.
> 
> However, when the filesystem is frozen, inodegc is turned off and
> the flush does nothing and does not block. This results in lookup
> spinning on NEED_INACTIVE inodes and being unable to make progress
> until the filesystem is thawed. This is less than ideal.
> 
> The only reason we can't immediately recycle the inode is that it is
> queued on a lockless list we can't remove it from. However, those
> lists now support lazy removal, and so we can now modify the lookup
> code to reactivate inodes queued for inactivation. The process is
> identical to how we recycle reclaimable inodes from xfs_iget(), so
> this ends up being a relatively simple change to make.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_icache.c | 98 +++++++++++++++++++++++++++++++++++----------
>  1 file changed, 76 insertions(+), 22 deletions(-)
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index 10588f78f679..1fc55ed0692c 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -64,6 +64,8 @@ static int xfs_icwalk_ag(struct xfs_perag *pag,
>  					 XFS_ICWALK_FLAG_RECLAIM_SICK | \
>  					 XFS_ICWALK_FLAG_UNION)
>  
> +static void xfs_inodegc_queue(struct xfs_inode *ip);
> +
>  /*
>   * Allocate and initialise an xfs_inode.
>   */
> @@ -328,6 +330,7 @@ xfs_reinit_inode(
>  	return error;
>  }
>  
> +
>  /*
>   * Carefully nudge an inode whose VFS state has been torn down back into a
>   * usable state.  Drops the i_flags_lock and the rcu read lock.
> @@ -391,7 +394,71 @@ xfs_iget_recycle(
>  	inode->i_state = I_NEW;
>  	spin_unlock(&ip->i_flags_lock);
>  	spin_unlock(&pag->pag_ici_lock);
> +	XFS_STATS_INC(mp, xs_ig_frecycle);
> +	return 0;
> +}
>  
> +static int
> +xfs_iget_reactivate(
> +	struct xfs_perag	*pag,
> +	struct xfs_inode	*ip) __releases(&ip->i_flags_lock)
> +{
> +	struct xfs_mount	*mp = ip->i_mount;
> +	struct inode		*inode = VFS_I(ip);
> +	int			error;
> +
> +	trace_xfs_iget_recycle(ip);
> +
> +	/*
> +	 * Take the ILOCK here to serialise against lookup races with putting
> +	 * the inode back on the inodegc queue during error handling.
> +	 */
> +	if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
> +		return -EAGAIN;
> +
> +	/*
> +	 * Move the state to inactivating so both inactivation and racing
> +	 * lookups will skip over this inode until we've finished reactivating
> +	 * it and can return it to the XFS_INEW state.
> +	 */
> +	ip->i_flags &= ~XFS_NEED_INACTIVE;
> +	ip->i_flags |= XFS_INACTIVATING;
> +	spin_unlock(&ip->i_flags_lock);
> +	rcu_read_unlock();
> +
> +	ASSERT(!rwsem_is_locked(&inode->i_rwsem));
> +	error = xfs_reinit_inode(mp, inode);
> +	if (error) {
> +		/*
> +		 * Well, that sucks. Put the inode back on the inactive queue.
> +		 * Do this while still under the ILOCK so that we can set the
> +		 * NEED_INACTIVE flag and clear the INACTIVATING flag and not
> +		 * have another lookup race with us before we've finished
> +		 * putting the inode back on the inodegc queue.
> +		 */
> +		spin_lock(&ip->i_flags_lock);
> +		ip->i_flags |= XFS_NEED_INACTIVE;
> +		ip->i_flags &= ~XFS_INACTIVATING;
> +		spin_unlock(&ip->i_flags_lock);
> +
> +		xfs_inodegc_queue(ip);
> +		xfs_iunlock(ip, XFS_ILOCK_EXCL);
> +
> +		return error;

Needs a trace_xfs_iget_recycle_fail here.

Do we want/need separate tracepoints for reactivation?  I'm guessing not
really since either way (reclaim/inactivation) we're recreating the vfs
state of an inode that hadn't yet been fully zapped.

The code changes here look good to me otherwise.

--D

> +	}
> +	xfs_iunlock(ip, XFS_ILOCK_EXCL);
> +
> +	/*
> +	 * Reset the inode state to new so that xfs_iget() will complete
> +	 * the required remaining inode initialisation before it returns the
> +	 * inode to the caller.
> +	 */
> +	spin_lock(&ip->i_flags_lock);
> +	ip->i_flags &= ~XFS_IRECLAIM_RESET_FLAGS;
> +	ip->i_flags |= XFS_INEW;
> +	inode->i_state = I_NEW;
> +	spin_unlock(&ip->i_flags_lock);
> +	XFS_STATS_INC(mp, xs_ig_frecycle);
>  	return 0;
>  }
>  
> @@ -523,14 +590,6 @@ xfs_iget_cache_hit(
>  	if (ip->i_flags & (XFS_INEW | XFS_IRECLAIM | XFS_INACTIVATING))
>  		goto out_skip;
>  
> -	if (ip->i_flags & XFS_NEED_INACTIVE) {
> -		/* Unlinked inodes cannot be re-grabbed. */
> -		if (VFS_I(ip)->i_nlink == 0) {
> -			error = -ENOENT;
> -			goto out_error;
> -		}
> -		goto out_inodegc_flush;
> -	}
>  
>  	/*
>  	 * Check the inode free state is valid. This also detects lookup
> @@ -542,11 +601,18 @@ xfs_iget_cache_hit(
>  
>  	/* Skip inodes that have no vfs state. */
>  	if ((flags & XFS_IGET_INCORE) &&
> -	    (ip->i_flags & XFS_IRECLAIMABLE))
> +	    (ip->i_flags & (XFS_IRECLAIMABLE | XFS_NEED_INACTIVE)))
>  		goto out_skip;
>  
>  	/* The inode fits the selection criteria; process it. */
> -	if (ip->i_flags & XFS_IRECLAIMABLE) {
> +	if (ip->i_flags & XFS_NEED_INACTIVE) {
> +		/* Drops i_flags_lock and RCU read lock. */
> +		error = xfs_iget_reactivate(pag, ip);
> +		if (error == -EAGAIN)
> +			goto out_skip;
> +		if (error)
> +			return error;
> +	} else if (ip->i_flags & XFS_IRECLAIMABLE) {
>  		/* Drops i_flags_lock and RCU read lock. */
>  		error = xfs_iget_recycle(pag, ip);
>  		if (error == -EAGAIN)
> @@ -575,23 +641,11 @@ xfs_iget_cache_hit(
>  
>  out_skip:
>  	trace_xfs_iget_skip(ip);
> -	XFS_STATS_INC(mp, xs_ig_frecycle);
>  	error = -EAGAIN;
>  out_error:
>  	spin_unlock(&ip->i_flags_lock);
>  	rcu_read_unlock();
>  	return error;
> -
> -out_inodegc_flush:
> -	spin_unlock(&ip->i_flags_lock);
> -	rcu_read_unlock();
> -	/*
> -	 * Do not wait for the workers, because the caller could hold an AGI
> -	 * buffer lock.  We're just going to sleep in a loop anyway.
> -	 */
> -	if (xfs_is_inodegc_enabled(mp))
> -		xfs_inodegc_queue_all(mp);
> -	return -EAGAIN;
>  }
>  
>  static int
> -- 
> 2.43.0
> 
> 


* Re: [PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget
  2024-02-01  0:30 ` [PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget Dave Chinner
  2024-02-01 19:36   ` Darrick J. Wong
@ 2024-02-14  4:00   ` kernel test robot
  1 sibling, 0 replies; 10+ messages in thread
From: kernel test robot @ 2024-02-14  4:00 UTC (permalink / raw)
  To: Dave Chinner; +Cc: oe-lkp, lkp, linux-xfs, oliver.sang



Hello,

kernel test robot noticed "xfstests.xfs.183.fail" on:

commit: 98e62582b2cc1b05a1075ce816256e8f257a6881 ("[PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget")
url: https://github.com/intel-lab-lkp/linux/commits/Dave-Chinner/xfs-make-inode-inactivation-state-changes-atomic/20240201-085509
base: https://git.kernel.org/cgit/fs/xfs/xfs-linux.git for-next
patch link: https://lore.kernel.org/all/20240201005217.1011010-5-david@fromorbit.com/
patch subject: [PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget

in testcase: xfstests
version: xfstests-x86_64-c46ca4d1-1_20240205
with following parameters:

	disk: 4HDD
	fs: xfs
	test: xfs-183



compiler: gcc-12
test machine: 4 threads Intel(R) Xeon(R) CPU E3-1225 v5 @ 3.30GHz (Skylake) with 16G memory

(please refer to attached dmesg/kmsg for entire log/backtrace)




If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202402141107.4642a3e7-oliver.sang@intel.com

2024-02-08 20:30:36 export TEST_DIR=/fs/sda1
2024-02-08 20:30:36 export TEST_DEV=/dev/sda1
2024-02-08 20:30:36 export FSTYP=xfs
2024-02-08 20:30:36 export SCRATCH_MNT=/fs/scratch
2024-02-08 20:30:36 mkdir /fs/scratch -p
2024-02-08 20:30:36 export SCRATCH_DEV=/dev/sda4
2024-02-08 20:30:36 export SCRATCH_LOGDEV=/dev/sda2
2024-02-08 20:30:36 export SCRATCH_XFS_LIST_METADATA_FIELDS=u3.sfdir3.hdr.parent.i4
2024-02-08 20:30:36 export SCRATCH_XFS_LIST_FUZZ_VERBS=random
2024-02-08 20:30:36 echo xfs/183
2024-02-08 20:30:36 ./check xfs/183
FSTYP         -- xfs (debug)
PLATFORM      -- Linux/x86_64 lkp-skl-d06 6.8.0-rc2-00006-g98e62582b2cc #1 SMP PREEMPT_DYNAMIC Thu Feb  8 18:32:20 CST 2024
MKFS_OPTIONS  -- -f /dev/sda4
MOUNT_OPTIONS -- /dev/sda4 /fs/scratch

xfs/183       - output mismatch (see /lkp/benchmarks/xfstests/results//xfs/183.out.bad)
    --- tests/xfs/183.out	2024-02-05 17:37:40.000000000 +0000
    +++ /lkp/benchmarks/xfstests/results//xfs/183.out.bad	2024-02-08 20:32:06.555798923 +0000
    @@ -1,4 +1,4 @@
     QA output created by 183
     Start original bulkstat_unlink_test with -r switch
     Runing extended checks.
    -Iteration 0 ... (100 files)passed
    +Iteration 0 ... (100 files)ERROR, count(100) != scount(2).
    ...
    (Run 'diff -u /lkp/benchmarks/xfstests/tests/xfs/183.out /lkp/benchmarks/xfstests/results//xfs/183.out.bad'  to see the entire diff)
Ran: xfs/183
Failures: xfs/183
Failed 1 of 1 tests




The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240214/202402141107.4642a3e7-oliver.sang@intel.com



-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


