linux-xfs.vger.kernel.org archive mirror
* [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation
@ 2021-03-18 16:17 Brian Foster
  2021-03-18 16:17 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster
                   ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: Brian Foster @ 2021-03-18 16:17 UTC (permalink / raw)
  To: linux-xfs

Hi all,

This is v3 of the allocbt block set aside fixup. The primary change in
v3 is to filter out rmapbt blocks from the usage accounting. rmapbt
blocks live in free space similar to allocbt blocks, but are managed
appropriately via perag reservation and so should not be set aside from
reservation requests.

Brian

v3:
- Use a mount flag for easy detection of active perag reservation.
- Filter rmapbt blocks from allocbt block accounting.
v2: https://lore.kernel.org/linux-xfs/20210222152108.896178-1-bfoster@redhat.com/
- Use an atomic counter instead of a percpu counter.
v1: https://lore.kernel.org/linux-xfs/20210217132339.651020-1-bfoster@redhat.com/

Brian Foster (2):
  xfs: set a mount flag when perag reservation is active
  xfs: set aside allocation btree blocks from block reservation

 fs/xfs/libxfs/xfs_ag_resv.c     | 24 ++++++++++++++----------
 fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
 fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
 fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
 fs/xfs/xfs_mount.h              |  7 +++++++
 5 files changed, 52 insertions(+), 11 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-18 16:17 [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
@ 2021-03-18 16:17 ` Brian Foster
  2021-03-18 20:55   ` Dave Chinner
  2021-03-18 16:17 ` [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
  2021-04-09 14:17 ` [PATCH v3 0/2] " Brian Foster
  2 siblings, 1 reply; 26+ messages in thread
From: Brian Foster @ 2021-03-18 16:17 UTC (permalink / raw)
  To: linux-xfs

perag reservation is enabled at mount time on a per AG basis. The
upcoming in-core allocation btree accounting mechanism needs to know
when reservation is enabled and that all perag AGF contexts are
initialized. As a preparation step, set a flag in the mount
structure and unconditionally initialize the pagf on all mounts
where at least one reservation is active.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_ag_resv.c | 24 ++++++++++++++----------
 fs/xfs/xfs_mount.h          |  1 +
 2 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_ag_resv.c b/fs/xfs/libxfs/xfs_ag_resv.c
index fdfe6dc0d307..8e454097b905 100644
--- a/fs/xfs/libxfs/xfs_ag_resv.c
+++ b/fs/xfs/libxfs/xfs_ag_resv.c
@@ -250,6 +250,7 @@ xfs_ag_resv_init(
 	xfs_extlen_t			ask;
 	xfs_extlen_t			used;
 	int				error = 0;
+	bool				has_resv = false;
 
 	/* Create the metadata reservation. */
 	if (pag->pag_meta_resv.ar_asked == 0) {
@@ -287,6 +288,8 @@ xfs_ag_resv_init(
 			if (error)
 				goto out;
 		}
+		if (ask)
+			has_resv = true;
 	}
 
 	/* Create the RMAPBT metadata reservation */
@@ -300,18 +303,19 @@ xfs_ag_resv_init(
 		error = __xfs_ag_resv_init(pag, XFS_AG_RESV_RMAPBT, ask, used);
 		if (error)
 			goto out;
+		if (ask)
+			has_resv = true;
 	}
 
-#ifdef DEBUG
-	/* need to read in the AGF for the ASSERT below to work */
-	error = xfs_alloc_pagf_init(pag->pag_mount, tp, pag->pag_agno, 0);
-	if (error)
-		return error;
-
-	ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved +
-	       xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <=
-	       pag->pagf_freeblks + pag->pagf_flcount);
-#endif
+	if (has_resv) {
+		mp->m_has_agresv = true;
+		error = xfs_alloc_pagf_init(mp, tp, pag->pag_agno, 0);
+		if (error)
+			return error;
+		ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved +
+		       xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <=
+		       pag->pagf_freeblks + pag->pagf_flcount);
+	}
 out:
 	return error;
 }
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index 659ad95fe3e0..489d9b2c53d9 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -139,6 +139,7 @@ typedef struct xfs_mount {
 	bool			m_fail_unmount;
 	bool			m_finobt_nores; /* no per-AG finobt resv. */
 	bool			m_update_sb;	/* sb needs update in mount */
+	bool			m_has_agresv;	/* perag reservations active */
 
 	/*
 	 * Bitsets of per-fs metadata that have been checked and/or are sick.
-- 
2.26.2



* [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-03-18 16:17 [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
  2021-03-18 16:17 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster
@ 2021-03-18 16:17 ` Brian Foster
  2021-03-18 20:31   ` Darrick J. Wong
  2021-04-09 14:17 ` [PATCH v3 0/2] " Brian Foster
  2 siblings, 1 reply; 26+ messages in thread
From: Brian Foster @ 2021-03-18 16:17 UTC (permalink / raw)
  To: linux-xfs

The blocks used for allocation btrees (bnobt and cntbt) are
technically considered free space. This is because as free space is
used, allocbt blocks are removed and naturally become available for
traditional allocation. However, this means that a significant
portion of free space may consist of in-use btree blocks if free
space is severely fragmented.

On large filesystems with large perag reservations, this can lead to
a rare but nasty condition where a significant amount of physical
free space is available, but the majority of actual usable blocks
consist of in-use allocbt blocks. We have a record of a (~12TB, 32
AG) filesystem with multiple AGs in a state with ~2.5GB or so free
blocks tracked across ~300 total allocbt blocks, but effectively at
100% full because the free space is entirely consumed by
refcountbt perag reservation.

Such a large perag reservation is by design on large filesystems.
The problem is that because the free space is so fragmented, this AG
contributes the 300 or so allocbt blocks to the global counters as
free space. If this pattern repeats across enough AGs, the
filesystem lands in a state where global block reservation can
outrun physical block availability. For example, a streaming
buffered write on the affected filesystem continues to allow delayed
allocation beyond the point where writeback starts to fail due to
physical block allocation failures. The expected behavior is for the
delalloc block reservation to fail gracefully with -ENOSPC before
physical block allocation failure is a possibility.

To address this problem, introduce an in-core counter to track the
sum of all allocbt blocks in use by the filesystem. Use the new
counter to set these blocks aside at reservation time and thus
ensure they cannot be reserved until truly available. Since this is
only necessary when perag reservations are active and the counter
requires a read of each AGF to fully populate, only enforce on perag
res enabled filesystems. This allows initialization of the counter
at ->pagf_init time because the perag reservation init code reads
each AGF at mount time.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
 fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
 fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
 fs/xfs/xfs_mount.h              |  6 ++++++
 4 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index 0c623d3c1036..9fa378d2724e 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -3036,6 +3036,7 @@ xfs_alloc_read_agf(
 	struct xfs_agf		*agf;		/* ag freelist header */
 	struct xfs_perag	*pag;		/* per allocation group data */
 	int			error;
+	uint32_t		allocbt_blks;
 
 	trace_xfs_alloc_read_agf(mp, agno);
 
@@ -3066,6 +3067,17 @@ xfs_alloc_read_agf(
 		pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
 		pag->pagf_init = 1;
 		pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
+
+		/*
+		 * Update the global in-core allocbt block counter. Filter
+		 * rmapbt blocks from the on-disk counter because those are
+		 * managed by perag reservation.
+		 */
+		if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) {
+			allocbt_blks = pag->pagf_btreeblks -
+					be32_to_cpu(agf->agf_rmap_blocks);
+			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);
+		}
 	}
 #ifdef DEBUG
 	else if (!XFS_FORCED_SHUTDOWN(mp)) {
diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
index 8e01231b308e..9f5a45f7baed 100644
--- a/fs/xfs/libxfs/xfs_alloc_btree.c
+++ b/fs/xfs/libxfs/xfs_alloc_btree.c
@@ -71,6 +71,7 @@ xfs_allocbt_alloc_block(
 		return 0;
 	}
 
+	atomic64_inc(&cur->bc_mp->m_allocbt_blks);
 	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false);
 
 	xfs_trans_agbtree_delta(cur->bc_tp, 1);
@@ -95,6 +96,7 @@ xfs_allocbt_free_block(
 	if (error)
 		return error;
 
+	atomic64_dec(&cur->bc_mp->m_allocbt_blks);
 	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
 			      XFS_EXTENT_BUSY_SKIP_DISCARD);
 	xfs_trans_agbtree_delta(cur->bc_tp, -1);
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 1c97b155a8ee..29f97e909607 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -1176,6 +1176,7 @@ xfs_mod_fdblocks(
 	int64_t			lcounter;
 	long long		res_used;
 	s32			batch;
+	uint64_t		set_aside;
 
 	if (delta > 0) {
 		/*
@@ -1215,8 +1216,23 @@ xfs_mod_fdblocks(
 	else
 		batch = XFS_FDBLOCKS_BATCH;
 
+	/*
+	 * Set aside allocbt blocks because these blocks are tracked as free
+	 * space but not available for allocation. Technically this means that a
+	 * single reservation cannot consume all remaining free space, but the
+	 * ratio of allocbt blocks to usable free blocks should be rather small.
+	 * The tradeoff without this is that filesystems that maintain high
+	 * perag block reservations can over reserve physical block availability
+	 * and fail physical allocation, which leads to much more serious
+	 * problems (i.e. transaction abort, pagecache discards, etc.) than
+	 * slightly premature -ENOSPC.
+	 */
+	set_aside = mp->m_alloc_set_aside;
+	if (mp->m_has_agresv)
+		set_aside += atomic64_read(&mp->m_allocbt_blks);
+
 	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
-	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
+	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
 				     XFS_FDBLOCKS_BATCH) >= 0) {
 		/* we had space! */
 		return 0;
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index 489d9b2c53d9..041f437dc117 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -171,6 +171,12 @@ typedef struct xfs_mount {
 	 * extents or anything related to the rt device.
 	 */
 	struct percpu_counter	m_delalloc_blks;
+	/*
+	 * Global count of allocation btree blocks in use across all AGs. Only
+	 * used when perag reservation is enabled. Helps prevent block
+	 * reservation from attempting to reserve allocation btree blocks.
+	 */
+	atomic64_t		m_allocbt_blks;
 
 	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
 	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */
-- 
2.26.2



* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-03-18 16:17 ` [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
@ 2021-03-18 20:31   ` Darrick J. Wong
  2021-03-19 15:00     ` Brian Foster
  0 siblings, 1 reply; 26+ messages in thread
From: Darrick J. Wong @ 2021-03-18 20:31 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-xfs

On Thu, Mar 18, 2021 at 12:17:07PM -0400, Brian Foster wrote:
> The blocks used for allocation btrees (bnobt and cntbt) are
> technically considered free space. This is because as free space is
> used, allocbt blocks are removed and naturally become available for
> traditional allocation. However, this means that a significant
> portion of free space may consist of in-use btree blocks if free
> space is severely fragmented.
> 
> On large filesystems with large perag reservations, this can lead to
> a rare but nasty condition where a significant amount of physical
> free space is available, but the majority of actual usable blocks
> consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> blocks tracked across ~300 total allocbt blocks, but effectively at
> 100% full because the free space is entirely consumed by
> refcountbt perag reservation.
> 
> Such a large perag reservation is by design on large filesystems.
> The problem is that because the free space is so fragmented, this AG
> contributes the 300 or so allocbt blocks to the global counters as
> free space. If this pattern repeats across enough AGs, the
> filesystem lands in a state where global block reservation can
> outrun physical block availability. For example, a streaming
> buffered write on the affected filesystem continues to allow delayed
> allocation beyond the point where writeback starts to fail due to
> physical block allocation failures. The expected behavior is for the
> delalloc block reservation to fail gracefully with -ENOSPC before
> physical block allocation failure is a possibility.
> 
> To address this problem, introduce an in-core counter to track the
> sum of all allocbt blocks in use by the filesystem. Use the new
> counter to set these blocks aside at reservation time and thus
> ensure they cannot be reserved until truly available. Since this is
> only necessary when perag reservations are active and the counter
> requires a read of each AGF to fully populate, only enforce on perag
> res enabled filesystems. This allows initialization of the counter
> at ->pagf_init time because the perag reservation init code reads
> each AGF at mount time.
> 
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
>  fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
>  fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
>  fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
>  fs/xfs/xfs_mount.h              |  6 ++++++
>  4 files changed, 37 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> index 0c623d3c1036..9fa378d2724e 100644
> --- a/fs/xfs/libxfs/xfs_alloc.c
> +++ b/fs/xfs/libxfs/xfs_alloc.c
> @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf(
>  	struct xfs_agf		*agf;		/* ag freelist header */
>  	struct xfs_perag	*pag;		/* per allocation group data */
>  	int			error;
> +	uint32_t		allocbt_blks;
>  
>  	trace_xfs_alloc_read_agf(mp, agno);
>  
> @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf(
>  		pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
>  		pag->pagf_init = 1;
>  		pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
> +
> +		/*
> +		 * Update the global in-core allocbt block counter. Filter
> +		 * rmapbt blocks from the on-disk counter because those are
> +		 * managed by perag reservation.
> +		 */
> +		if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) {
> +			allocbt_blks = pag->pagf_btreeblks -
> +					be32_to_cpu(agf->agf_rmap_blocks);
> +			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);

Does growfs call xfs_alloc_read_agf to set pagf_init in the perag
structure when it adds AGs to the filesystem?  I don't /think/ that's
a problem for this use case (since allocbt_blks should be 0 on a freshly
initialized AG) but i was a little surprised to see block reservation
bits here.

The other thing is that online repair (what little of it is in the
kernel currently) can set pagf_init = 0; is that a problem?

--D

> +		}
>  	}
>  #ifdef DEBUG
>  	else if (!XFS_FORCED_SHUTDOWN(mp)) {
> diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
> index 8e01231b308e..9f5a45f7baed 100644
> --- a/fs/xfs/libxfs/xfs_alloc_btree.c
> +++ b/fs/xfs/libxfs/xfs_alloc_btree.c
> @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block(
>  		return 0;
>  	}
>  
> +	atomic64_inc(&cur->bc_mp->m_allocbt_blks);
>  	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false);
>  
>  	xfs_trans_agbtree_delta(cur->bc_tp, 1);
> @@ -95,6 +96,7 @@ xfs_allocbt_free_block(
>  	if (error)
>  		return error;
>  
> +	atomic64_dec(&cur->bc_mp->m_allocbt_blks);
>  	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
>  			      XFS_EXTENT_BUSY_SKIP_DISCARD);
>  	xfs_trans_agbtree_delta(cur->bc_tp, -1);
> diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> index 1c97b155a8ee..29f97e909607 100644
> --- a/fs/xfs/xfs_mount.c
> +++ b/fs/xfs/xfs_mount.c
> @@ -1176,6 +1176,7 @@ xfs_mod_fdblocks(
>  	int64_t			lcounter;
>  	long long		res_used;
>  	s32			batch;
> +	uint64_t		set_aside;
>  
>  	if (delta > 0) {
>  		/*
> @@ -1215,8 +1216,23 @@ xfs_mod_fdblocks(
>  	else
>  		batch = XFS_FDBLOCKS_BATCH;
>  
> +	/*
> +	 * Set aside allocbt blocks because these blocks are tracked as free
> +	 * space but not available for allocation. Technically this means that a
> +	 * single reservation cannot consume all remaining free space, but the
> +	 * ratio of allocbt blocks to usable free blocks should be rather small.
> +	 * The tradeoff without this is that filesystems that maintain high
> +	 * perag block reservations can over reserve physical block availability
> +	 * and fail physical allocation, which leads to much more serious
> +	 * problems (i.e. transaction abort, pagecache discards, etc.) than
> +	 * slightly premature -ENOSPC.
> +	 */
> +	set_aside = mp->m_alloc_set_aside;
> +	if (mp->m_has_agresv)
> +		set_aside += atomic64_read(&mp->m_allocbt_blks);
> +
>  	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
> -	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
> +	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
>  				     XFS_FDBLOCKS_BATCH) >= 0) {
>  		/* we had space! */
>  		return 0;
> diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> index 489d9b2c53d9..041f437dc117 100644
> --- a/fs/xfs/xfs_mount.h
> +++ b/fs/xfs/xfs_mount.h
> @@ -171,6 +171,12 @@ typedef struct xfs_mount {
>  	 * extents or anything related to the rt device.
>  	 */
>  	struct percpu_counter	m_delalloc_blks;
> +	/*
> +	 * Global count of allocation btree blocks in use across all AGs. Only
> +	 * used when perag reservation is enabled. Helps prevent block
> +	 * reservation from attempting to reserve allocation btree blocks.
> +	 */
> +	atomic64_t		m_allocbt_blks;
>  
>  	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
>  	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */
> -- 
> 2.26.2
> 


* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-18 16:17 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster
@ 2021-03-18 20:55   ` Dave Chinner
  2021-03-18 22:19     ` Darrick J. Wong
  0 siblings, 1 reply; 26+ messages in thread
From: Dave Chinner @ 2021-03-18 20:55 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-xfs

On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> perag reservation is enabled at mount time on a per AG basis. The
> upcoming in-core allocation btree accounting mechanism needs to know
> when reservation is enabled and that all perag AGF contexts are
> initialized. As a preparation step, set a flag in the mount
> structure and unconditionally initialize the pagf on all mounts
> where at least one reservation is active.

I'm not sure this is a good idea. AFAICT, this means just about any
filesystem with finobt, reflink and/or rmap will now typically read
every AGF header in the filesystem at mount time. That means pretty
much every v5 filesystem in production...

We've always tried to avoid needing to reading all AG headers at
mount time because that does not scale when we have really large
filesystems (I'm talking petabytes here). We should only read AG
headers if there is something not fully recovered during the mount
(i.e. slow path) and not on every mount.

Needing to do a few thousand synchronous read IOs during mount makes
mount very slow, and as such we always try to do dynamic
instantiation of AG headers...  Testing I've done with exabyte scale
filesystems (>10^6 AGs) show that it can take minutes for mount to
run when each AG header needs to be read, and that's on SSDs where
the individual read latency is only a couple of hundred
microseconds. On spinning disks that can do 200 IOPS, we're
potentially talking hours just to mount really large filesystems...

Hence I don't think that any algorithm that requires reading every
AGF header in the filesystem at mount time on every v5 filesystem
already out there in production (because finobt triggers this) is a
particularly good idea...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-18 20:55   ` Dave Chinner
@ 2021-03-18 22:19     ` Darrick J. Wong
  2021-03-19  1:05       ` Dave Chinner
  0 siblings, 1 reply; 26+ messages in thread
From: Darrick J. Wong @ 2021-03-18 22:19 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Brian Foster, linux-xfs

On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > perag reservation is enabled at mount time on a per AG basis. The
> > upcoming in-core allocation btree accounting mechanism needs to know
> > when reservation is enabled and that all perag AGF contexts are
> > initialized. As a preparation step, set a flag in the mount
> > structure and unconditionally initialize the pagf on all mounts
> > where at least one reservation is active.
> 
> I'm not sure this is a good idea. AFAICT, this means just about any
> filesystem with finobt, reflink and/or rmap will now typically read
> every AGF header in the filesystem at mount time. That means pretty
> much every v5 filesystem in production...

They already do that, because the AG headers are where we store the
btree block counts.

> We've always tried to avoid needing to reading all AG headers at
> mount time because that does not scale when we have really large
> filesystems (I'm talking petabytes here). We should only read AG
> headers if there is something not fully recovered during the mount
> (i.e. slow path) and not on every mount.
> 
> Needing to do a few thousand synchronous read IOs during mount makes
> mount very slow, and as such we always try to do dynamic
> instantiation of AG headers...  Testing I've done with exabyte scale
> filesystems (>10^6 AGs) show that it can take minutes for mount to
> run when each AG header needs to be read, and that's on SSDs where
> the individual read latency is only a couple of hundred
> microseconds. On spinning disks that can do 200 IOPS, we're
> potentially talking hours just to mount really large filesystems...

Is that with reflink enabled?  Reflink always scans the right edge of
the refcount btree at mount to clean out stale COW staging extents, and
(prior to the introduction of the inode btree counts feature last year)
we also had to walk the entire finobt to find out how big it is.

TBH I think the COW recovery and the AG block reservation pieces are
prime candidates for throwing at an xfs_pwork workqueue so we can
perform those scans in parallel.

> Hence I don't think that any algorithm that requires reading every
> AGF header in the filesystem at mount time on every v5 filesystem
> already out there in production (because finobt triggers this) is a
> particularly good idea...

Perhaps not, but the horse bolted 5 years ago. :/

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com


* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-18 22:19     ` Darrick J. Wong
@ 2021-03-19  1:05       ` Dave Chinner
  2021-03-19  1:34         ` Darrick J. Wong
  2021-03-19  1:43         ` Dave Chinner
  0 siblings, 2 replies; 26+ messages in thread
From: Dave Chinner @ 2021-03-19  1:05 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: Brian Foster, linux-xfs

On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > > perag reservation is enabled at mount time on a per AG basis. The
> > > upcoming in-core allocation btree accounting mechanism needs to know
> > > when reservation is enabled and that all perag AGF contexts are
> > > initialized. As a preparation step, set a flag in the mount
> > > structure and unconditionally initialize the pagf on all mounts
> > > where at least one reservation is active.
> > 
> > I'm not sure this is a good idea. AFAICT, this means just about any
> > filesystem with finobt, reflink and/or rmap will now typically read
> > every AGF header in the filesystem at mount time. That means pretty
> > much every v5 filesystem in production...
> 
> They already do that, because the AG headers are where we store the
> btree block counts.

Oh, we're brute forcing AG reservation space? I thought we were
doing something smarter than that, because I'm sure this isn't the
first time I've mentioned this problem....

> > We've always tried to avoid needing to reading all AG headers at
> > mount time because that does not scale when we have really large
> > filesystems (I'm talking petabytes here). We should only read AG
> > headers if there is something not fully recovered during the mount
> > (i.e. slow path) and not on every mount.
> > 
> > Needing to do a few thousand synchronous read IOs during mount makes
> > mount very slow, and as such we always try to do dynamic
> > instantiation of AG headers...  Testing I've done with exabyte scale
> > filesystems (>10^6 AGs) show that it can take minutes for mount to
> > run when each AG header needs to be read, and that's on SSDs where
> > the individual read latency is only a couple of hundred
> > microseconds. On spinning disks that can do 200 IOPS, we're
> > potentially talking hours just to mount really large filesystems...
> 
> Is that with reflink enabled?  Reflink always scans the right edge of
> the refcount btree at mount to clean out stale COW staging extents,

Aren't they cleaned up at unmount when the inode is inactivated?
i.e. isn't this something that should only be done on an unclean
mount?

> and
> (prior to the introduction of the inode btree counts feature last year)
> we also had to walk the entire finobt to find out how big it is.

ugh, I forgot about the fact we had to add that wart because we
screwed up the space reservations for finobt operations...

As for large scale testing, I suspect I turned everything optional
off when I last did this testing, because mkfs currently requires a
lot of per-AG IO to initialise structures. On an SSD, mkfs.xfs
-K -f -d agcount=10000 ... takes

		mkfs time	mount time
-m crc=0	15s		1s
-m rmapbt=1	25s		6s

Multiply those times by another 1000 to get to an 8EB
filesystem and the difference is several hours of mkfs time and
a couple of hours of mount time....

So from the numbers, it is pretty likely I didn't test anything that
actually required iterating 8 million AGs at mount time....

> TBH I think the COW recovery and the AG block reservation pieces are
> prime candidates for throwing at an xfs_pwork workqueue so we can
> perform those scans in parallel.

As I mentioned on #xfs, I think we only need to do the AG read if we
are near enospc. i.e. we can take the entire reservation at mount
time (which is fixed per-ag) and only take away the used from the
reservation (i.e. return to the free space pool) when we actually
access the AGF/AGI the first time. Or when we get a ENOSPC
event, which might occur when we try to take the fixed reservation
at mount time...

> > Hence I don't think that any algorithm that requires reading every
> > AGF header in the filesystem at mount time on every v5 filesystem
> > already out there in production (because finobt triggers this) is a
> > particularly good idea...
> 
> Perhaps not, but the horse bolted 5 years ago. :/

Let's go catch it :P

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:05       ` Dave Chinner
@ 2021-03-19  1:34         ` Darrick J. Wong
  2021-03-19 14:54           ` Brian Foster
  2021-03-19  1:43         ` Dave Chinner
  1 sibling, 1 reply; 26+ messages in thread
From: Darrick J. Wong @ 2021-03-19  1:34 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Brian Foster, linux-xfs

On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> > > On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > > > perag reservation is enabled at mount time on a per AG basis. The
> > > > upcoming in-core allocation btree accounting mechanism needs to know
> > > > when reservation is enabled and that all perag AGF contexts are
> > > > initialized. As a preparation step, set a flag in the mount
> > > > structure and unconditionally initialize the pagf on all mounts
> > > > where at least one reservation is active.
> > > 
> > > I'm not sure this is a good idea. AFAICT, this means just about any
> > > filesystem with finobt, reflink and/or rmap will now typically read
> > > every AGF header in the filesystem at mount time. That means pretty
> > > much every v5 filesystem in production...
> > 
> > They already do that, because the AG headers are where we store the
> > btree block counts.
> 
> Oh, we're brute forcing AG reservation space? I thought we were
> doing something smarter than that, because I'm sure this isn't the
> first time I've mentioned this problem....

Probably not... :)

> > > We've always tried to avoid needing to reading all AG headers at
> > > mount time because that does not scale when we have really large
> > > filesystems (I'm talking petabytes here). We should only read AG
> > > headers if there is something not fully recovered during the mount
> > > (i.e. slow path) and not on every mount.
> > > 
> > > Needing to do a few thousand synchronous read IOs during mount makes
> > > mount very slow, and as such we always try to do dynamic
> > > instantiation of AG headers...  Testing I've done with exabyte scale
> > > filesystems (>10^6 AGs) show that it can take minutes for mount to
> > > run when each AG header needs to be read, and that's on SSDs where
> > > the individual read latency is only a couple of hundred
> > > microseconds. On spinning disks that can do 200 IOPS, we're
> > > potentially talking hours just to mount really large filesystems...
> > 
> > Is that with reflink enabled?  Reflink always scans the right edge of
> > the refcount btree at mount to clean out stale COW staging extents,
> 
> Aren't they cleaned up at unmount when the inode is inactivated?

Yes.  Or when the blockgc timeout expires, or when ENOSPC pushes
blockgc...

> i.e. isn't this something that should only be done on an unclean
> mount?

Years ago (back when reflink was experimental) we left it that way so
that if there were any serious implementation bugs we wouldn't leak
blocks everywhere.  I think we forgot to take it out.

> > and
> > (prior to the introduction of the inode btree counts feature last year)
> > we also had to walk the entire finobt to find out how big it is.
> 
> ugh, I forgot about the fact we had to add that wart because we
> screwed up the space reservations for finobt operations...

Yeah.

> As for large scale testing, I suspect I turned everything optional
> off when I last did this testing, because mkfs currently requires a
> lot of per-AG IO to initialise structures. On an SSD, mkfs.xfs
> -K -f -d agcount=10000 ... takes
> 
> 		mkfs time	mount time
> -m crc=0	15s		1s
> -m rmapbt=1	25s		6s
> 
> Multiply those times by another 1000 to get to an 8EB
> filesystem and the difference is several hours of mkfs time and
> a couple of hours of mount time....
> 
> So from the numbers, it is pretty likely I didn't test anything that
> actually required iterating 8 million AGs at mount time....
> 
> > TBH I think the COW recovery and the AG block reservation pieces are
> > prime candidates for throwing at an xfs_pwork workqueue so we can
> > perform those scans in parallel.

[This didn't turn out to be difficult at all.]

> As I mentioned on #xfs, I think we only need to do the AG read if we
> are near enospc. i.e. we can take the entire reservation at mount
> time (which is fixed per-ag) and only take away the used from the
> reservation (i.e. return to the free space pool) when we actually
> access the AGF/AGI the first time. Or when we get an ENOSPC
> event, which might occur when we try to take the fixed reservation
> at mount time...

<nod> That's probably not hard.  Compute the theoretical maximum size of
the finobt/rmapbt/refcountbt, multiply that by the number of AGs, try to
reserve that much, and if we get it, we can trivially initialise the
per-AG reservation structure.  If that fails, we fall back to the
scanning thing we do now:

When we set pag[if]_init in the per-AG structure, we can back off the
space reservation by the number of blocks in the trees tracked by that
AG header, which will add that quantity to fdblocks.  We can handle the
ENOSPC case by modifying the per-AG blockgc worker to load the AGF/AGI
if they aren't already.
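
The fast path described above can be sketched as a small userspace model. This is only an illustration of the idea, not kernel code: the struct, the helper names (`worst_case_ag_resv`, `try_mount_resv`), and the worst-case formula are all made up for the sketch; the real per-btree worst-case math lives in the `xfs_*bt_max_size()` helpers and xfs_ag_resv.c.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model of "reserve the theoretical maximum up front, fall back
 * to scanning AG headers only if that overcommits free space".
 * Not kernel code; all names here are invented for illustration.
 */
struct toy_mount {
	uint32_t	agcount;	/* number of AGs */
	uint64_t	fdblocks;	/* free blocks available */
	uint64_t	ag_resv;	/* blocks held for per-AG reservation */
};

/* Worst-case blocks one AG's finobt/rmapbt/refcountbt could need. */
static uint64_t worst_case_ag_resv(uint64_t agblocks)
{
	/* Placeholder formula; real code derives this from btree geometry. */
	return agblocks / 50;
}

/*
 * Fast path: take the theoretical maximum for every AG without reading
 * any AG headers.  Returns false if free space can't cover it, in which
 * case the caller would fall back to the per-AG scan done today.
 */
static bool try_mount_resv(struct toy_mount *mp, uint64_t agblocks)
{
	uint64_t want = worst_case_ag_resv(agblocks) * mp->agcount;

	if (want > mp->fdblocks)
		return false;	/* near ENOSPC: must scan AG headers */
	mp->fdblocks -= want;
	mp->ag_resv += want;
	return true;
}
```

When the fast path succeeds, no AGF/AGI reads happen at mount; the per-AG back-off (returning the over-reserved excess to fdblocks) then occurs lazily as each AG header is first read.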

> > > Hence I don't think that any algorithm that requires reading every
> > > AGF header in the filesystem at mount time on every v5 filesystem
> > > already out there in production (because finobt triggers this) is a
> > > particularly good idea...
> > 
> > Perhaps not, but the horse bolted 5 years ago. :/
> 
> Let's go catch it :P

FWIW I previously fixed the rmapbt/reflink transaction reservations
being unnecessarily large, so (provided deferred inode inactivation gets
reviewed this cycle) I can try to put all these reflink cleanups
together for the next cycle.

--D

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:05       ` Dave Chinner
  2021-03-19  1:34         ` Darrick J. Wong
@ 2021-03-19  1:43         ` Dave Chinner
  2021-03-19  1:48           ` Darrick J. Wong
  2021-03-19 14:54           ` Brian Foster
  1 sibling, 2 replies; 26+ messages in thread
From: Dave Chinner @ 2021-03-19  1:43 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: Brian Foster, linux-xfs

On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > TBH I think the COW recovery and the AG block reservation pieces are
> > prime candidates for throwing at an xfs_pwork workqueue so we can
> > perform those scans in parallel.
> 
> As I mentioned on #xfs, I think we only need to do the AG read if we
> are near enospc. i.e. we can take the entire reservation at mount
> time (which is fixed per-ag) and only take away the used from the
> reservation (i.e. return to the free space pool) when we actually
> > access the AGF/AGI the first time. Or when we get an ENOSPC
> event, which might occur when we try to take the fixed reservation
> at mount time...

Which leaves the question about when we need to actually do the
accounting needed to fix the bug Brian is trying to fix. Can that be
delayed until we read the AGFs or have an ENOSPC event occur? Or
maybe some other "we are near ENOSPC and haven't read all AGFs yet"
threshold/trigger?

If that's the case, then I'm happy to have this patchset proceed as
it stands under the understanding that there will be follow up to
> make the clean, lots of space free mount case avoid reading the
AG headers.

If it can't be made constrained, then I think we probably need to
come up with a different approach that doesn't require reading every
AG header on every mount...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:43         ` Dave Chinner
@ 2021-03-19  1:48           ` Darrick J. Wong
  2021-03-19  2:08             ` Dave Chinner
  2021-03-19 14:54           ` Brian Foster
  1 sibling, 1 reply; 26+ messages in thread
From: Darrick J. Wong @ 2021-03-19  1:48 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Brian Foster, linux-xfs

On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > TBH I think the COW recovery and the AG block reservation pieces are
> > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > perform those scans in parallel.
> > 
> > As I mentioned on #xfs, I think we only need to do the AG read if we
> > are near enospc. i.e. we can take the entire reservation at mount
> > time (which is fixed per-ag) and only take away the used from the
> > reservation (i.e. return to the free space pool) when we actually
> > access the AGF/AGI the first time. Or when we get an ENOSPC
> > event, which might occur when we try to take the fixed reservation
> > at mount time...
> 
> Which leaves the question about when we need to actually do the
> accounting needed to fix the bug Brian is trying to fix. Can that be
> delayed until we read the AGFs or have an ENOSPC event occur? Or
> maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> threshold/trigger?

Or just load them in the background and let mount() return to userspace?

> If that's the case, then I'm happy to have this patchset proceed as
> it stands under the understanding that there will be follow up to
> make the clean, lots of space free mount case avoid reading the
> AG headers.
> 
> If it can't be made constrained, then I think we probably need to
> come up with a different approach that doesn't require reading every
> AG header on every mount...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:48           ` Darrick J. Wong
@ 2021-03-19  2:08             ` Dave Chinner
  0 siblings, 0 replies; 26+ messages in thread
From: Dave Chinner @ 2021-03-19  2:08 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: Brian Foster, linux-xfs

On Thu, Mar 18, 2021 at 06:48:21PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> > On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > > TBH I think the COW recovery and the AG block reservation pieces are
> > > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > > perform those scans in parallel.
> > > 
> > > As I mentioned on #xfs, I think we only need to do the AG read if we
> > > are near enospc. i.e. we can take the entire reservation at mount
> > > time (which is fixed per-ag) and only take away the used from the
> > > reservation (i.e. return to the free space pool) when we actually
> > > access the AGF/AGI the first time. Or when we get an ENOSPC
> > > event, which might occur when we try to take the fixed reservation
> > > at mount time...
> > 
> > Which leaves the question about when we need to actually do the
> > accounting needed to fix the bug Brian is trying to fix. Can that be
> > delayed until we read the AGFs or have an ENOSPC event occur? Or
> > maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> > threshold/trigger?
> 
> Or just load them in the background and let mount() return to userspace?

Perhaps, but that tends to have impacts on things that run
immediately after mount. e.g. it will screw with benchmarks in
unpredictable ways and I'm not going to like that at all. :(

i.e. I like the deterministic, repeatable behaviour we have right
now because it makes back-to-back performance testing easy to reason
about why performance/behaviour changed...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:34         ` Darrick J. Wong
@ 2021-03-19 14:54           ` Brian Foster
  0 siblings, 0 replies; 26+ messages in thread
From: Brian Foster @ 2021-03-19 14:54 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: Dave Chinner, linux-xfs

On Thu, Mar 18, 2021 at 06:34:30PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> > > > On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > > > > perag reservation is enabled at mount time on a per AG basis. The
> > > > > upcoming in-core allocation btree accounting mechanism needs to know
> > > > > when reservation is enabled and that all perag AGF contexts are
> > > > > initialized. As a preparation step, set a flag in the mount
> > > > > structure and unconditionally initialize the pagf on all mounts
> > > > > where at least one reservation is active.
> > > > 
> > > > I'm not sure this is a good idea. AFAICT, this means just about any
> > > > filesystem with finobt, reflink and/or rmap will now typically read
> > > > every AGF header in the filesystem at mount time. That means pretty
> > > > much every v5 filesystem in production...
> > > 
> > > They already do that, because the AG headers are where we store the
> > > btree block counts.
> > 
> > Oh, we're brute forcing AG reservation space? I thought we were
> > doing something smarter than that, because I'm sure this isn't the
> > first time I've mentioned this problem....
> 
> Probably not... :)
> 
> > > > We've always tried to avoid needing to read all AG headers at
> > > > mount time because that does not scale when we have really large
> > > > filesystems (I'm talking petabytes here). We should only read AG
> > > > headers if there is something not fully recovered during the mount
> > > > (i.e. slow path) and not on every mount.
> > > > 
> > > > Needing to do a few thousand synchronous read IOs during mount makes
> > > > mount very slow, and as such we always try to do dynamic
> > > > instantiation of AG headers...  Testing I've done with exabyte scale
> > > > filesystems (>10^6 AGs) shows that it can take minutes for mount to
> > > > run when each AG header needs to be read, and that's on SSDs where
> > > > the individual read latency is only a couple of hundred
> > > > microseconds. On spinning disks that can do 200 IOPS, we're
> > > > potentially talking hours just to mount really large filesystems...
> > > 
> > > Is that with reflink enabled?  Reflink always scans the right edge of
> > > the refcount btree at mount to clean out stale COW staging extents,
> > 
> > Aren't they cleaned up at unmount when the inode is inactivated?
> 
> Yes.  Or when the blockgc timeout expires, or when ENOSPC pushes
> blockgc...
> 
> > i.e. isn't this something that should only be done on an unclean
> > mount?
> 
> Years ago (back when reflink was experimental) we left it that way so
> that if there were any serious implementation bugs we wouldn't leak
> blocks everywhere.  I think we forgot to take it out.
> 
> > > and
> > > (prior to the introduction of the inode btree counts feature last year)
> > > we also had to walk the entire finobt to find out how big it is.
> > 
> > ugh, I forgot about the fact we had to add that wart because we
> > screwed up the space reservations for finobt operations...
> 
> Yeah.
> 
> > As for large scale testing, I suspect I turned everything optional
> > off when I last did this testing, because mkfs currently requires a
> > lot of per-AG IO to initialise structures. On an SSD, mkfs.xfs
> > -K -f -d agcount=10000 ... takes
> > 
> > 		mkfs time	mount time
> > -m crc=0	15s		1s
> > -m rmapbt=1	25s		6s
> > 
> > Multiply those times by another 1000 to get to an 8EB
> > filesystem and the difference is several hours of mkfs time and
> > a couple of hours of mount time....
> > 
> > So from the numbers, it is pretty likely I didn't test anything that
> > actually required iterating 8 million AGs at mount time....
> > 
> > > TBH I think the COW recovery and the AG block reservation pieces are
> > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > perform those scans in parallel.
> 
> [This didn't turn out to be difficult at all.]
> 
> > As I mentioned on #xfs, I think we only need to do the AG read if we
> > are near enospc. i.e. we can take the entire reservation at mount
> > time (which is fixed per-ag) and only take away the used from the
> > reservation (i.e. return to the free space pool) when we actually
> > access the AGF/AGI the first time. Or when we get an ENOSPC
> > event, which might occur when we try to take the fixed reservation
> > at mount time...
> 
> <nod> That's probably not hard.  Compute the theoretical maximum size of
> the finobt/rmapbt/refcountbt, multiply that by the number of AGs, try to
> reserve that much, and if we get it, we can trivially initialise the
> per-AG reservation structure.  If that fails, we fall back to the
> scanning thing we do now:
> 

Even in the failure case, we might be able to limit the mount time
scanning that takes place by just scanning until we've found enough AGs
with consumed reservation such that the mount time estimated reservation
succeeds. Of course, the worst case would always be a full scan (either
due to -ENOSPC on the res or very shortly after mount since the res
might have left the fs near -ENOSPC) so it might not be worth it unless
there's value and the logic is simple..

Brian

> When we set pag[if]_init in the per-AG structure, we can back off the
> space reservation by the number of blocks in the trees tracked by that
> AG header, which will add that quantity to fdblocks.  We can handle the
> ENOSPC case by modifying the per-AG blockgc worker to load the AGF/AGI
> if they aren't already.
> 
> > > > Hence I don't think that any algorithm that requires reading every
> > > > AGF header in the filesystem at mount time on every v5 filesystem
> > > > already out there in production (because finobt triggers this) is a
> > > > particularly good idea...
> > > 
> > > Perhaps not, but the horse bolted 5 years ago. :/
> > 
> > Let's go catch it :P
> 
> FWIW I previously fixed the rmapbt/reflink transaction reservations
> being unnecessarily large, so (provided deferred inode inactivation gets
> reviewed this cycle) I can try to put all these reflink cleanups
> together for the next cycle.
> 
> --D
> 
> > 
> > Cheers,
> > 
> > Dave.
> > -- 
> > Dave Chinner
> > david@fromorbit.com
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19  1:43         ` Dave Chinner
  2021-03-19  1:48           ` Darrick J. Wong
@ 2021-03-19 14:54           ` Brian Foster
  2021-03-23 22:40             ` Dave Chinner
  1 sibling, 1 reply; 26+ messages in thread
From: Brian Foster @ 2021-03-19 14:54 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Darrick J. Wong, linux-xfs

On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > TBH I think the COW recovery and the AG block reservation pieces are
> > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > perform those scans in parallel.
> > 
> > As I mentioned on #xfs, I think we only need to do the AG read if we
> > are near enospc. i.e. we can take the entire reservation at mount
> > time (which is fixed per-ag) and only take away the used from the
> > reservation (i.e. return to the free space pool) when we actually
> > access the AGF/AGI the first time. Or when we get an ENOSPC
> > event, which might occur when we try to take the fixed reservation
> > at mount time...
> 
> Which leaves the question about when we need to actually do the
> accounting needed to fix the bug Brian is trying to fix. Can that be
> delayed until we read the AGFs or have an ENOSPC event occur? Or
> maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> threshold/trigger?
> 

Technically there isn't a hard requirement to read in any AGFs at mount
time. The tradeoff is that leaves a gap in effectiveness until at least
the majority of allocbt blocks have been accounted for (via perag agf
initialization). The in-core counter simply folds into the reservation
set aside value, so it would just remain at 0 at reservation time and
behave as if the mechanism didn't exist in the first place. The obvious
risk is a user can mount the fs and immediately acquire reservation
without having populated the counter from enough AGs to prevent the
reservation overrun problem. For that reason, I didn't really consider
the "lazy" init approach a suitable fix and hooked onto the (mostly)
preexisting perag res behavior to initialize the appropriate structures
at mount time.
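
The way the counter "simply folds into the reservation set aside value" can be shown with a toy model. The field names below mirror the patch (`m_allocbt_blks`, `m_alloc_set_aside`, `m_has_agresv`), but this is a simplified userspace sketch: it uses plain integers instead of the percpu/atomic counters and omits the add-then-undo sequencing the real xfs_mod_fdblocks() does.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of the set-aside fold; not the kernel implementation. */
struct toy_mount {
	int64_t		m_fdblocks;		/* free block counter */
	uint64_t	m_alloc_set_aside;	/* static per-mount set-aside */
	uint64_t	m_allocbt_blks;		/* in-use allocbt blocks */
	bool		m_has_agresv;		/* perag reservation active? */
};

/* Returns 0 on success, -1 (ENOSPC) if the reservation would overrun. */
static int toy_mod_fdblocks(struct toy_mount *mp, int64_t delta)
{
	uint64_t set_aside = mp->m_alloc_set_aside;

	/*
	 * With perag reservation active, allocbt blocks are known to be
	 * fully accounted, so hide them from reservation.  Without it,
	 * the counter remains zero-initialized and this is a no-op --
	 * behaving as if the mechanism didn't exist, as noted above.
	 */
	if (mp->m_has_agresv)
		set_aside += mp->m_allocbt_blks;

	if (mp->m_fdblocks + delta < (int64_t)set_aside)
		return -1;	/* premature but graceful ENOSPC */
	mp->m_fdblocks += delta;
	return 0;
}
```

The point of the fold is visible in the failure case: a reservation that fits within raw free space can still fail once the in-use allocbt blocks are subtracted from what is reservable.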

If that underlying mount time behavior changes, it's not totally clear
to me how that impacts this patch. If the perag res change relies on an
overestimated mount time reservation and a fallback to a hard scan on
-ENOSPC, then I wonder whether the overestimated reservation might
effectively subsume whatever the allocbt set aside might be for that AG.
If so, and the perag init effectively transfers excess reservation back
to free space at the same time allocbt blocks are accounted for (and set
aside from subsequent reservations), perhaps that has a similar net
effect as the current behavior (of initializing the allocbt count at
mount time)..?

One problem is that might be hard to reason about even with code in
place, let alone right now when the targeted behavior is still
vaporware. OTOH, I suppose that if we do know right now that the perag
res scan will still fall back to mount time scans beyond some low free
space threshold, perhaps it's just a matter of factoring allocbt set
aside into the threshold somehow so that we know the counter will always
be initialized before a user can over reserve blocks. As it is, I don't
really have a strong opinion on whether we should try to make this fix
now and preserve it, or otherwise table it and revisit once we know what
the resulting perag res code will look like. Thoughts?

Brian

> If that's the case, then I'm happy to have this patchset proceed as
> it stands under the understanding that there will be follow up to
> make the clean, lots of space free mount case avoid reading the
> AG headers.
> 
> If it can't be made constrained, then I think we probably need to
> come up with a different approach that doesn't require reading every
> AG header on every mount...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-03-18 20:31   ` Darrick J. Wong
@ 2021-03-19 15:00     ` Brian Foster
  2021-03-27  1:34       ` Darrick J. Wong
  0 siblings, 1 reply; 26+ messages in thread
From: Brian Foster @ 2021-03-19 15:00 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs

On Thu, Mar 18, 2021 at 01:31:53PM -0700, Darrick J. Wong wrote:
> On Thu, Mar 18, 2021 at 12:17:07PM -0400, Brian Foster wrote:
> > The blocks used for allocation btrees (bnobt and countbt) are
> > technically considered free space. This is because as free space is
> > used, allocbt blocks are removed and naturally become available for
> > traditional allocation. However, this means that a significant
> > portion of free space may consist of in-use btree blocks if free
> > space is severely fragmented.
> > 
> > On large filesystems with large perag reservations, this can lead to
> > a rare but nasty condition where a significant amount of physical
> > free space is available, but the majority of actual usable blocks
> > consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> > AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> > blocks tracked across ~300 total allocbt blocks, but effectively at
> > 100% full because the free space is entirely consumed by
> > refcountbt perag reservation.
> > 
> > Such a large perag reservation is by design on large filesystems.
> > The problem is that because the free space is so fragmented, this AG
> > contributes the 300 or so allocbt blocks to the global counters as
> > free space. If this pattern repeats across enough AGs, the
> > filesystem lands in a state where global block reservation can
> > outrun physical block availability. For example, a streaming
> > buffered write on the affected filesystem continues to allow delayed
> > allocation beyond the point where writeback starts to fail due to
> > physical block allocation failures. The expected behavior is for the
> > delalloc block reservation to fail gracefully with -ENOSPC before
> > physical block allocation failure is a possibility.
> > 
> > To address this problem, introduce an in-core counter to track the
> > sum of all allocbt blocks in use by the filesystem. Use the new
> > counter to set these blocks aside at reservation time and thus
> > ensure they cannot be reserved until truly available. Since this is
> > only necessary when perag reservations are active and the counter
> > requires a read of each AGF to fully populate, only enforce on perag
> > res enabled filesystems. This allows initialization of the counter
> > at ->pagf_init time because the perag reservation init code reads
> > each AGF at mount time.
> > 
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
> >  fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
> >  fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
> >  fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
> >  fs/xfs/xfs_mount.h              |  6 ++++++
> >  4 files changed, 37 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> > index 0c623d3c1036..9fa378d2724e 100644
> > --- a/fs/xfs/libxfs/xfs_alloc.c
> > +++ b/fs/xfs/libxfs/xfs_alloc.c
> > @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf(
> >  	struct xfs_agf		*agf;		/* ag freelist header */
> >  	struct xfs_perag	*pag;		/* per allocation group data */
> >  	int			error;
> > +	uint32_t		allocbt_blks;
> >  
> >  	trace_xfs_alloc_read_agf(mp, agno);
> >  
> > @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf(
> >  		pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
> >  		pag->pagf_init = 1;
> >  		pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
> > +
> > +		/*
> > +		 * Update the global in-core allocbt block counter. Filter
> > +		 * rmapbt blocks from the on-disk counter because those are
> > +		 * managed by perag reservation.
> > +		 */
> > +		if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) {
> > +			allocbt_blks = pag->pagf_btreeblks -
> > +					be32_to_cpu(agf->agf_rmap_blocks);
> > +			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);
> 
> Does growfs call xfs_alloc_read_agf to set pagf_init in the perag
> structure when it adds AGs to the filesystem?  I don't /think/ that's
> a problem for this use case (since allocbt_blks should be 0 on a freshly
> initialized AG) but i was a little surprised to see block reservation
> bits here.
> 

I'm not sure it matters who reads AGFs as long as the global counter
remains consistent. For growing an existing AG, it looks like we "free"
the new space into the AG so I think that should be tracked accordingly
like any other alloc/free. For a new AG, it looks like ->agf_btreeblks
starts at 0 and then the perags would likely init through the perag res
call that runs as the final step before the growfs returns.

> The other thing is that online repair (what little of it is in the
> kernel currently) can set pagf_init = 0; is that a problem?
> 

Hmm.. if the AGF is corrupt, it seems likely that the in-core counter is
busted as well. We could do something like account the delta from pre
and post repair into the global counter, but I'd be wary of scenarios
where the AGF might have become inconsistent with the counter somehow
and the delta itself would throw it off. That might be unlikely, but
what scares me about that is we could actually break the global counter
by attempting to fix it so allocbt blocks become reservable again.

I'm not sure there's a great answer here. Perhaps the safest thing would
be to warn about an ->agf_btreeblks inconsistency that might result in
some number of "unusable" blocks until a mount cycle occurs and
resynchronizes the global counter..? That also seems to be consistent
with how we handle the superblock counters after an agf repair.

(All that said, I am somewhat hesitant to add to this unless we
determine the approach is still viable after the expected perag res
changes...).

Brian

> --D
> 
> > +		}
> >  	}
> >  #ifdef DEBUG
> >  	else if (!XFS_FORCED_SHUTDOWN(mp)) {
> > diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
> > index 8e01231b308e..9f5a45f7baed 100644
> > --- a/fs/xfs/libxfs/xfs_alloc_btree.c
> > +++ b/fs/xfs/libxfs/xfs_alloc_btree.c
> > @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block(
> >  		return 0;
> >  	}
> >  
> > +	atomic64_inc(&cur->bc_mp->m_allocbt_blks);
> >  	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false);
> >  
> >  	xfs_trans_agbtree_delta(cur->bc_tp, 1);
> > @@ -95,6 +96,7 @@ xfs_allocbt_free_block(
> >  	if (error)
> >  		return error;
> >  
> > +	atomic64_dec(&cur->bc_mp->m_allocbt_blks);
> >  	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
> >  			      XFS_EXTENT_BUSY_SKIP_DISCARD);
> >  	xfs_trans_agbtree_delta(cur->bc_tp, -1);
> > diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> > index 1c97b155a8ee..29f97e909607 100644
> > --- a/fs/xfs/xfs_mount.c
> > +++ b/fs/xfs/xfs_mount.c
> > @@ -1176,6 +1176,7 @@ xfs_mod_fdblocks(
> >  	int64_t			lcounter;
> >  	long long		res_used;
> >  	s32			batch;
> > +	uint64_t		set_aside;
> >  
> >  	if (delta > 0) {
> >  		/*
> > @@ -1215,8 +1216,23 @@ xfs_mod_fdblocks(
> >  	else
> >  		batch = XFS_FDBLOCKS_BATCH;
> >  
> > +	/*
> > +	 * Set aside allocbt blocks because these blocks are tracked as free
> > +	 * space but not available for allocation. Technically this means that a
> > +	 * single reservation cannot consume all remaining free space, but the
> > +	 * ratio of allocbt blocks to usable free blocks should be rather small.
> > +	 * The tradeoff without this is that filesystems that maintain high
> > +	 * perag block reservations can over reserve physical block availability
> > +	 * and fail physical allocation, which leads to much more serious
> > +	 * problems (i.e. transaction abort, pagecache discards, etc.) than
> > +	 * slightly premature -ENOSPC.
> > +	 */
> > +	set_aside = mp->m_alloc_set_aside;
> > +	if (mp->m_has_agresv)
> > +		set_aside += atomic64_read(&mp->m_allocbt_blks);
> > +
> >  	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
> > -	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
> > +	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
> >  				     XFS_FDBLOCKS_BATCH) >= 0) {
> >  		/* we had space! */
> >  		return 0;
> > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> > index 489d9b2c53d9..041f437dc117 100644
> > --- a/fs/xfs/xfs_mount.h
> > +++ b/fs/xfs/xfs_mount.h
> > @@ -171,6 +171,12 @@ typedef struct xfs_mount {
> >  	 * extents or anything related to the rt device.
> >  	 */
> >  	struct percpu_counter	m_delalloc_blks;
> > +	/*
> > +	 * Global count of allocation btree blocks in use across all AGs. Only
> > +	 * used when perag reservation is enabled. Helps prevent block
> > +	 * reservation from attempting to reserve allocation btree blocks.
> > +	 */
> > +	atomic64_t		m_allocbt_blks;
> >  
> >  	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
> >  	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */
> > -- 
> > 2.26.2
> > 
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-19 14:54           ` Brian Foster
@ 2021-03-23 22:40             ` Dave Chinner
  2021-03-24 14:24               ` Brian Foster
  0 siblings, 1 reply; 26+ messages in thread
From: Dave Chinner @ 2021-03-23 22:40 UTC (permalink / raw)
  To: Brian Foster; +Cc: Darrick J. Wong, linux-xfs

On Fri, Mar 19, 2021 at 10:54:25AM -0400, Brian Foster wrote:
> On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> > On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > > TBH I think the COW recovery and the AG block reservation pieces are
> > > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > > perform those scans in parallel.
> > > 
> > > As I mentioned on #xfs, I think we only need to do the AG read if we
> > > are near enospc. i.e. we can take the entire reservation at mount
> > > time (which is fixed per-ag) and only take away the used from the
> > > reservation (i.e. return to the free space pool) when we actually
> > > access the AGF/AGI the first time. Or when we get an ENOSPC
> > > event, which might occur when we try to take the fixed reservation
> > > at mount time...
> > 
> > Which leaves the question about when we need to actually do the
> > accounting needed to fix the bug Brian is trying to fix. Can that be
> > delayed until we read the AGFs or have an ENOSPC event occur? Or
> > maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> > threshold/trigger?
> > 
> 
> Technically there isn't a hard requirement to read in any AGFs at mount
> time. The tradeoff is that leaves a gap in effectiveness until at least
> the majority of allocbt blocks have been accounted for (via perag agf
> initialization). The in-core counter simply folds into the reservation
> set aside value, so it would just remain at 0 at reservation time and
> behave as if the mechanism didn't exist in the first place. The obvious
> risk is a user can mount the fs and immediately acquire reservation
> without having populated the counter from enough AGs to prevent the
> reservation overrun problem. For that reason, I didn't really consider
> the "lazy" init approach a suitable fix and hooked onto the (mostly)
> preexisting perag res behavior to initialize the appropriate structures
> at mount time.
> 
> If that underlying mount time behavior changes, it's not totally clear
> to me how that impacts this patch. If the perag res change relies on an
> overestimated mount time reservation and a fallback to a hard scan on
> -ENOSPC, then I wonder whether the overestimated reservation might
> effectively subsume whatever the allocbt set aside might be for that AG.
> If so, and the perag init effectively transfers excess reservation back
> to free space at the same time allocbt blocks are accounted for (and set
> aside from subsequent reservations), perhaps that has a similar net
> effect as the current behavior (of initializing the allocbt count at
> mount time)..?
> 
> One problem is that this might be hard to reason about even with code in
> place, let alone right now when the targeted behavior is still
> vaporware. OTOH, I suppose that if we do know right now that the perag
> res scan will still fall back to mount time scans beyond some low free
> space threshold, perhaps it's just a matter of factoring allocbt set
> aside into the threshold somehow so that we know the counter will always
> be initialized before a user can over reserve blocks.

Yeah, that seems reasonable to me. I don't think it's difficult to
handle - just set the setaside to maximum at mount time, then as we
read in AGFs we replace the maximum setaside for that AG with the
actual btree block usage. If we hit ENOSPC, then we can read in the
uninitialised pags to reduce the setaside from the maximum to the
actual values and return that free space back to the global pool...

> As it is, I don't
> really have a strong opinion on whether we should try to make this fix
> now and preserve it, or otherwise table it and revisit once we know what
> the resulting perag res code will look like. Thoughts?

It sounds like we have a solid plan to address the AG header access
at mount time, adding this code now doesn't make anything worse,
nor does it appear to prevent us from fixing the AG header access
problem in the future. So I'm happy for this fix to go ahead as it
stands.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
  2021-03-23 22:40             ` Dave Chinner
@ 2021-03-24 14:24               ` Brian Foster
  0 siblings, 0 replies; 26+ messages in thread
From: Brian Foster @ 2021-03-24 14:24 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Darrick J. Wong, linux-xfs

On Wed, Mar 24, 2021 at 09:40:36AM +1100, Dave Chinner wrote:
> On Fri, Mar 19, 2021 at 10:54:25AM -0400, Brian Foster wrote:
> > On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> > > On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > > > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > > > TBH I think the COW recovery and the AG block reservation pieces are
> > > > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > > > perform those scans in parallel.
> > > > 
> > > > As I mentioned on #xfs, I think we only need to do the AG read if we
> > > > are near enospc. i.e. we can take the entire reservation at mount
> > > > time (which is fixed per-ag) and only take away the used from the
> > > > reservation (i.e. return to the free space pool) when we actually
> > > > access the AGF/AGI the first time. Or when we get an ENOSPC
> > > > event, which might occur when we try to take the fixed reservation
> > > > at mount time...
> > > 
> > > Which leaves the question about when we need to actually do the
> > > accounting needed to fix the bug Brian is trying to fix. Can that be
> > > delayed until we read the AGFs or have an ENOSPC event occur? Or
> > > maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> > > threshold/trigger?
> > > 
> > 
> > Technically there isn't a hard requirement to read in any AGFs at mount
> > time. The tradeoff is that this leaves a gap in effectiveness until at least
> > the majority of allocbt blocks have been accounted for (via perag agf
> > initialization). The in-core counter simply folds into the reservation
> > set aside value, so it would just remain at 0 at reservation time and
> > behave as if the mechanism didn't exist in the first place. The obvious
> > risk is a user can mount the fs and immediately acquire reservation
> > without having populated the counter from enough AGs to prevent the
> > reservation overrun problem. For that reason, I didn't really consider
> > the "lazy" init approach a suitable fix and hooked onto the (mostly)
> > preexisting perag res behavior to initialize the appropriate structures
> > at mount time.
> > 
> > If that underlying mount time behavior changes, it's not totally clear
> > to me how that impacts this patch. If the perag res change relies on an
> > overestimated mount time reservation and a fallback to a hard scan on
> > -ENOSPC, then I wonder whether the overestimated reservation might
> > effectively subsume whatever the allocbt set aside might be for that AG.
> > If so, and the perag init effectively transfers excess reservation back
> > to free space at the same time allocbt blocks are accounted for (and set
> > aside from subsequent reservations), perhaps that has a similar net
> > effect as the current behavior (of initializing the allocbt count at
> > mount time)..?
> > 
> > One problem is that this might be hard to reason about even with code in
> > place, let alone right now when the targeted behavior is still
> > vaporware. OTOH, I suppose that if we do know right now that the perag
> > res scan will still fall back to mount time scans beyond some low free
> > space threshold, perhaps it's just a matter of factoring allocbt set
> > aside into the threshold somehow so that we know the counter will always
> > be initialized before a user can over reserve blocks.
> 
> Yeah, that seems reasonable to me. I don't think it's difficult to
> handle - just set the setaside to maximum at mount time, then as we
> read in AGFs we replace the maximum setaside for that AG with the
> actual btree block usage. If we hit ENOSPC, then we can read in the
> uninitialised pags to reduce the setaside from the maximum to the
> actual values and return that free space back to the global pool...
> 

Ack. That seems like a generic enough fallback plan if the
overestimation of perag reservation doesn't otherwise cover the gap.

> > As it is, I don't
> > really have a strong opinion on whether we should try to make this fix
> > now and preserve it, or otherwise table it and revisit once we know what
> > the resulting perag res code will look like. Thoughts?
> 
> It sounds like we have a solid plan to address the AG header access
> at mount time, adding this code now doesn't make anything worse,
> nor does it appear to prevent us from fixing the AG header access
> problem in the future. So I'm happy for this fix to go ahead as it
> stands.
> 

Ok, so is that a Rv-b..? ;)

So far after a quick skim back through the discussion, I don't have a
reason for a v4 of this series...

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-03-19 15:00     ` Brian Foster
@ 2021-03-27  1:34       ` Darrick J. Wong
  2021-03-27 14:51         ` Brian Foster
  0 siblings, 1 reply; 26+ messages in thread
From: Darrick J. Wong @ 2021-03-27  1:34 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-xfs

On Fri, Mar 19, 2021 at 11:00:29AM -0400, Brian Foster wrote:
> On Thu, Mar 18, 2021 at 01:31:53PM -0700, Darrick J. Wong wrote:
> > On Thu, Mar 18, 2021 at 12:17:07PM -0400, Brian Foster wrote:
> > > The blocks used for allocation btrees (bnobt and countbt) are
> > > technically considered free space. This is because as free space is
> > > used, allocbt blocks are removed and naturally become available for
> > > traditional allocation. However, this means that a significant
> > > portion of free space may consist of in-use btree blocks if free
> > > space is severely fragmented.
> > > 
> > > On large filesystems with large perag reservations, this can lead to
> > > a rare but nasty condition where a significant amount of physical
> > > free space is available, but the majority of actual usable blocks
> > > consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> > > AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> > > blocks tracked across ~300 total allocbt blocks, but effectively at
> > > 100% full because the free space is entirely consumed by
> > > refcountbt perag reservation.
> > > 
> > > Such a large perag reservation is by design on large filesystems.
> > > The problem is that because the free space is so fragmented, this AG
> > > contributes the 300 or so allocbt blocks to the global counters as
> > > free space. If this pattern repeats across enough AGs, the
> > > filesystem lands in a state where global block reservation can
> > > outrun physical block availability. For example, a streaming
> > > buffered write on the affected filesystem continues to allow delayed
> > > allocation beyond the point where writeback starts to fail due to
> > > physical block allocation failures. The expected behavior is for the
> > > delalloc block reservation to fail gracefully with -ENOSPC before
> > > physical block allocation failure is a possibility.
> > > 
> > > To address this problem, introduce an in-core counter to track the
> > > sum of all allocbt blocks in use by the filesystem. Use the new
> > > counter to set these blocks aside at reservation time and thus
> > > ensure they cannot be reserved until truly available. Since this is
> > > only necessary when perag reservations are active and the counter
> > > requires a read of each AGF to fully populate, only enforce on perag
> > > res enabled filesystems. This allows initialization of the counter
> > > at ->pagf_init time because the perag reservation init code reads
> > > each AGF at mount time.
> > > 
> > > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > > ---
> > >  fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
> > >  fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
> > >  fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
> > >  fs/xfs/xfs_mount.h              |  6 ++++++
> > >  4 files changed, 37 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> > > index 0c623d3c1036..9fa378d2724e 100644
> > > --- a/fs/xfs/libxfs/xfs_alloc.c
> > > +++ b/fs/xfs/libxfs/xfs_alloc.c
> > > @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf(
> > >  	struct xfs_agf		*agf;		/* ag freelist header */
> > >  	struct xfs_perag	*pag;		/* per allocation group data */
> > >  	int			error;
> > > +	uint32_t		allocbt_blks;
> > >  
> > >  	trace_xfs_alloc_read_agf(mp, agno);
> > >  
> > > @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf(
> > >  		pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
> > >  		pag->pagf_init = 1;
> > >  		pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
> > > +
> > > +		/*
> > > +		 * Update the global in-core allocbt block counter. Filter
> > > +		 * rmapbt blocks from the on-disk counter because those are
> > > +		 * managed by perag reservation.
> > > +		 */
> > > +		if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) {
> > > +			allocbt_blks = pag->pagf_btreeblks -
> > > +					be32_to_cpu(agf->agf_rmap_blocks);
> > > +			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);
> > 
> > Does growfs call xfs_alloc_read_agf to set pagf_init in the perag
> > structure when it adds AGs to the filesystem?  I don't /think/ that's
> > a problem for this use case (since allocbt_blks should be 0 on a freshly
> > initialized AG) but i was a little surprised to see block reservation
> > bits here.
> > 
> 
> I'm not sure it matters who reads AGFs as long as the global counter
> remains consistent. For growing an existing AG, it looks like we "free"
> the new space into the AG so I think that should be tracked accordingly
> like any other alloc/free. For a new AG, it looks like ->agf_btreeblks
> starts at 0 and then the perags would likely init through the perag res
> call that runs as the final step before the growfs returns.

<nod>

> > The other thing is that online repair (what little of it is in the
> > kernel currently) can set pagf_init = 0; is that a problem?
> > 
> 
> Hmm.. if the AGF is corrupt, it seems likely that the in-core counter is
> busted as well. We could do something like account the delta from pre
> and post repair into the global counter, but I'd be wary of scenarios
> where the AGF might have become inconsistent with the counter somehow

How would the allocbt_blks counter become inconsistent with the AGF?  We
update that incore counter at the same time that we update all the other
ondisk and pag counters, so unless the fs is shut down, we know that
m_allocbt_blks is off by the same amount as the inconsistent AGF.
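If that "off by the same amount" invariant holds, the repair-time fix-up
would amount to applying a signed delta. A minimal sketch, assuming the
repair code has both the old (inconsistent) and rebuilt per-AG btreeblks
counts in hand; repair_adjust() is a hypothetical name, not an existing
kernel function:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the global mp->m_allocbt_blks counter. */
static int64_t m_allocbt_blks;

/*
 * On AGF repair, fold in the difference between the rebuilt and the
 * old per-AG btree block counts, so the global counter ends up as if
 * the AGF had been consistent all along.
 */
static void repair_adjust(uint32_t old_btreeblks, uint32_t new_btreeblks)
{
	m_allocbt_blks += (int64_t)new_btreeblks - (int64_t)old_btreeblks;
}
```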

> and the delta itself would throw it off. That might be unlikely, but
> what scares me about that is we could actually break the global counter
> by attempting to fix it so allocbt blocks become reservable again.

But if you don't even /try/ to fix the counter during an AGF repair,
that almost guarantees that the decisions based on the counter will not
be correct...

> I'm not sure there's a great answer here. Perhaps the safest thing would
> be to warn about an ->agf_btreeblks inconsistency that might result in
> some number of "unusable" blocks until a mount cycle occurs and
> resynchronizes the global counter..? That also seems to be consistent
> with how we handle the superblock counters after an agf repair.

...but OTOH I guess the worst that happens is that we'll ENOSPC early?

--D

> (All that said, I am somewhat hesitant to add to this unless we
> determine the approach is still viable after the expected perag res
> changes...).
> 
> Brian
> 
> > --D
> > 
> > > +		}
> > >  	}
> > >  #ifdef DEBUG
> > >  	else if (!XFS_FORCED_SHUTDOWN(mp)) {
> > > diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
> > > index 8e01231b308e..9f5a45f7baed 100644
> > > --- a/fs/xfs/libxfs/xfs_alloc_btree.c
> > > +++ b/fs/xfs/libxfs/xfs_alloc_btree.c
> > > @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block(
> > >  		return 0;
> > >  	}
> > >  
> > > +	atomic64_inc(&cur->bc_mp->m_allocbt_blks);
> > >  	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false);
> > >  
> > >  	xfs_trans_agbtree_delta(cur->bc_tp, 1);
> > > @@ -95,6 +96,7 @@ xfs_allocbt_free_block(
> > >  	if (error)
> > >  		return error;
> > >  
> > > +	atomic64_dec(&cur->bc_mp->m_allocbt_blks);
> > >  	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
> > >  			      XFS_EXTENT_BUSY_SKIP_DISCARD);
> > >  	xfs_trans_agbtree_delta(cur->bc_tp, -1);
> > > diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> > > index 1c97b155a8ee..29f97e909607 100644
> > > --- a/fs/xfs/xfs_mount.c
> > > +++ b/fs/xfs/xfs_mount.c
> > > @@ -1176,6 +1176,7 @@ xfs_mod_fdblocks(
> > >  	int64_t			lcounter;
> > >  	long long		res_used;
> > >  	s32			batch;
> > > +	uint64_t		set_aside;
> > >  
> > >  	if (delta > 0) {
> > >  		/*
> > > @@ -1215,8 +1216,23 @@ xfs_mod_fdblocks(
> > >  	else
> > >  		batch = XFS_FDBLOCKS_BATCH;
> > >  
> > > +	/*
> > > +	 * Set aside allocbt blocks because these blocks are tracked as free
> > > +	 * space but not available for allocation. Technically this means that a
> > > +	 * single reservation cannot consume all remaining free space, but the
> > > +	 * ratio of allocbt blocks to usable free blocks should be rather small.
> > > +	 * The tradeoff without this is that filesystems that maintain high
> > > +	 * perag block reservations can over reserve physical block availability
> > > +	 * and fail physical allocation, which leads to much more serious
> > > +	 * problems (i.e. transaction abort, pagecache discards, etc.) than
> > > +	 * slightly premature -ENOSPC.
> > > +	 */
> > > +	set_aside = mp->m_alloc_set_aside;
> > > +	if (mp->m_has_agresv)
> > > +		set_aside += atomic64_read(&mp->m_allocbt_blks);
> > > +
> > >  	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
> > > -	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
> > > +	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
> > >  				     XFS_FDBLOCKS_BATCH) >= 0) {
> > >  		/* we had space! */
> > >  		return 0;
> > > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> > > index 489d9b2c53d9..041f437dc117 100644
> > > --- a/fs/xfs/xfs_mount.h
> > > +++ b/fs/xfs/xfs_mount.h
> > > @@ -171,6 +171,12 @@ typedef struct xfs_mount {
> > >  	 * extents or anything related to the rt device.
> > >  	 */
> > >  	struct percpu_counter	m_delalloc_blks;
> > > +	/*
> > > +	 * Global count of allocation btree blocks in use across all AGs. Only
> > > +	 * used when perag reservation is enabled. Helps prevent block
> > > +	 * reservation from attempting to reserve allocation btree blocks.
> > > +	 */
> > > +	atomic64_t		m_allocbt_blks;
> > >  
> > >  	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
> > >  	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */
> > > -- 
> > > 2.26.2
> > > 
> > 
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-03-27  1:34       ` Darrick J. Wong
@ 2021-03-27 14:51         ` Brian Foster
  0 siblings, 0 replies; 26+ messages in thread
From: Brian Foster @ 2021-03-27 14:51 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs

On Fri, Mar 26, 2021 at 06:34:22PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 11:00:29AM -0400, Brian Foster wrote:
> > On Thu, Mar 18, 2021 at 01:31:53PM -0700, Darrick J. Wong wrote:
> > > On Thu, Mar 18, 2021 at 12:17:07PM -0400, Brian Foster wrote:
> > > > The blocks used for allocation btrees (bnobt and countbt) are
> > > > technically considered free space. This is because as free space is
> > > > used, allocbt blocks are removed and naturally become available for
> > > > traditional allocation. However, this means that a significant
> > > > portion of free space may consist of in-use btree blocks if free
> > > > space is severely fragmented.
> > > > 
> > > > On large filesystems with large perag reservations, this can lead to
> > > > a rare but nasty condition where a significant amount of physical
> > > > free space is available, but the majority of actual usable blocks
> > > > consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> > > > AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> > > > blocks tracked across ~300 total allocbt blocks, but effectively at
> > > > 100% full because the free space is entirely consumed by
> > > > refcountbt perag reservation.
> > > > 
> > > > Such a large perag reservation is by design on large filesystems.
> > > > The problem is that because the free space is so fragmented, this AG
> > > > contributes the 300 or so allocbt blocks to the global counters as
> > > > free space. If this pattern repeats across enough AGs, the
> > > > filesystem lands in a state where global block reservation can
> > > > outrun physical block availability. For example, a streaming
> > > > buffered write on the affected filesystem continues to allow delayed
> > > > allocation beyond the point where writeback starts to fail due to
> > > > physical block allocation failures. The expected behavior is for the
> > > > delalloc block reservation to fail gracefully with -ENOSPC before
> > > > physical block allocation failure is a possibility.
> > > > 
> > > > To address this problem, introduce an in-core counter to track the
> > > > sum of all allocbt blocks in use by the filesystem. Use the new
> > > > counter to set these blocks aside at reservation time and thus
> > > > ensure they cannot be reserved until truly available. Since this is
> > > > only necessary when perag reservations are active and the counter
> > > > requires a read of each AGF to fully populate, only enforce on perag
> > > > res enabled filesystems. This allows initialization of the counter
> > > > at ->pagf_init time because the perag reservation init code reads
> > > > each AGF at mount time.
> > > > 
> > > > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > > > ---
> > > >  fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
> > > >  fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
> > > >  fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
> > > >  fs/xfs/xfs_mount.h              |  6 ++++++
> > > >  4 files changed, 37 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> > > > index 0c623d3c1036..9fa378d2724e 100644
> > > > --- a/fs/xfs/libxfs/xfs_alloc.c
> > > > +++ b/fs/xfs/libxfs/xfs_alloc.c
> > > > @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf(
> > > >  	struct xfs_agf		*agf;		/* ag freelist header */
> > > >  	struct xfs_perag	*pag;		/* per allocation group data */
> > > >  	int			error;
> > > > +	uint32_t		allocbt_blks;
> > > >  
> > > >  	trace_xfs_alloc_read_agf(mp, agno);
> > > >  
> > > > @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf(
> > > >  		pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
> > > >  		pag->pagf_init = 1;
> > > >  		pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
> > > > +
> > > > +		/*
> > > > +		 * Update the global in-core allocbt block counter. Filter
> > > > +		 * rmapbt blocks from the on-disk counter because those are
> > > > +		 * managed by perag reservation.
> > > > +		 */
> > > > +		if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) {
> > > > +			allocbt_blks = pag->pagf_btreeblks -
> > > > +					be32_to_cpu(agf->agf_rmap_blocks);
> > > > +			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);
> > > 
> > > Does growfs call xfs_alloc_read_agf to set pagf_init in the perag
> > > structure when it adds AGs to the filesystem?  I don't /think/ that's
> > > a problem for this use case (since allocbt_blks should be 0 on a freshly
> > > initialized AG) but i was a little surprised to see block reservation
> > > bits here.
> > > 
> > 
> > I'm not sure it matters who reads AGFs as long as the global counter
> > remains consistent. For growing an existing AG, it looks like we "free"
> > the new space into the AG so I think that should be tracked accordingly
> > like any other alloc/free. For a new AG, it looks like ->agf_btreeblks
> > starts at 0 and then the perags would likely init through the perag res
> > call that runs as the final step before the growfs returns.
> 
> <nod>
> 
> > > The other thing is that online repair (what little of it is in the
> > > kernel currently) can set pagf_init = 0; is that a problem?
> > > 
> > 
> > Hmm.. if the AGF is corrupt, it seems likely that the in-core counter is
> > busted as well. We could do something like account the delta from pre
> > and post repair into the global counter, but I'd be wary of scenarios
> > where the AGF might have become inconsistent with the counter somehow
> 
> How would the allocbt_blks counter become inconsistent with the AGF?  We
> update that incore counter at the same time that we update all the other
> ondisk and pag counters, so unless the fs is shut down, we know that
> m_allocbt_blks is off by the same amount as the inconsistent AGF.
> 

Dunno, bit flip or something? Bug? I'm not going to try and predict how
things might break.

> > and the delta itself would throw it off. That might be unlikely, but
> > what scares me about that is we could actually break the global counter
> > by attempting to fix it so allocbt blocks become reservable again.
> 
> But if you don't even /try/ to fix the counter during an AGF repair,
> that almost guarantees that the decisions based on the counter will not
> be correct...
> 

It's not clear to me how to fix the counter after a single AGF repair
such that it's more likely to be correct than not. How do we currently
deal with ->m_fdblocks after a repair of a corrupted AGF? It looks to me
that we mark the summary counters sick and expect them to reinit on the
next mount based on the perag values, but I could be missing something
deeper in the scrub code. Is there a more explicit reinit or something
somewhere?

> > I'm not sure there's a great answer here. Perhaps the safest thing would
> > be to warn about an ->agf_btreeblks inconsistency that might result in
> > some number of "unusable" blocks until a mount cycle occurs and
> > resynchronizes the global counter..? That also seems to be consistent
> > with how we handle the superblock counters after an agf repair.
> 
> ...but OTOH I guess the worst that happens is that we'll ENOSPC early?
> 

Yeah. I think the characteristics of an inconsistent allocbt counter are
either that the set aside is ineffective (i.e. counter too low) or
premature -ENOSPC (counter too high).
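
Reduced to the comparison in the xfs_mod_fdblocks() hunk quoted above,
both skews are easy to see. The model below is a simplified standalone
sketch (not the kernel implementation; it ignores the percpu batching):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of the set-aside check in xfs_mod_fdblocks(). */
static bool can_reserve(int64_t fdblocks, int64_t delta,
			uint64_t alloc_set_aside, uint64_t allocbt_blks)
{
	/* Reservation succeeds only if free space stays at or above
	 * the combined setaside. */
	return fdblocks + delta >= (int64_t)(alloc_set_aside + allocbt_blks);
}
```

With 1000 free blocks, a base setaside of 100 and 300 real allocbt
blocks, an accurate counter allows exactly 600 blocks of reservation; a
counter that is too high refuses that same 600 (premature -ENOSPC),
while a counter stuck at 0 happily hands out 900 and lets reservation
outrun physical availability.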

Brian

> --D
> 
> > (All that said, I am somewhat hesitant to add to this unless we
> > determine the approach is still viable after the expected perag res
> > changes...).
> > 
> > Brian
> > 
> > > --D
> > > 
> > > > +		}
> > > >  	}
> > > >  #ifdef DEBUG
> > > >  	else if (!XFS_FORCED_SHUTDOWN(mp)) {
> > > > diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
> > > > index 8e01231b308e..9f5a45f7baed 100644
> > > > --- a/fs/xfs/libxfs/xfs_alloc_btree.c
> > > > +++ b/fs/xfs/libxfs/xfs_alloc_btree.c
> > > > @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block(
> > > >  		return 0;
> > > >  	}
> > > >  
> > > > +	atomic64_inc(&cur->bc_mp->m_allocbt_blks);
> > > >  	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false);
> > > >  
> > > >  	xfs_trans_agbtree_delta(cur->bc_tp, 1);
> > > > @@ -95,6 +96,7 @@ xfs_allocbt_free_block(
> > > >  	if (error)
> > > >  		return error;
> > > >  
> > > > +	atomic64_dec(&cur->bc_mp->m_allocbt_blks);
> > > >  	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
> > > >  			      XFS_EXTENT_BUSY_SKIP_DISCARD);
> > > >  	xfs_trans_agbtree_delta(cur->bc_tp, -1);
> > > > diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> > > > index 1c97b155a8ee..29f97e909607 100644
> > > > --- a/fs/xfs/xfs_mount.c
> > > > +++ b/fs/xfs/xfs_mount.c
> > > > @@ -1176,6 +1176,7 @@ xfs_mod_fdblocks(
> > > >  	int64_t			lcounter;
> > > >  	long long		res_used;
> > > >  	s32			batch;
> > > > +	uint64_t		set_aside;
> > > >  
> > > >  	if (delta > 0) {
> > > >  		/*
> > > > @@ -1215,8 +1216,23 @@ xfs_mod_fdblocks(
> > > >  	else
> > > >  		batch = XFS_FDBLOCKS_BATCH;
> > > >  
> > > > +	/*
> > > > +	 * Set aside allocbt blocks because these blocks are tracked as free
> > > > +	 * space but not available for allocation. Technically this means that a
> > > > +	 * single reservation cannot consume all remaining free space, but the
> > > > +	 * ratio of allocbt blocks to usable free blocks should be rather small.
> > > > +	 * The tradeoff without this is that filesystems that maintain high
> > > > +	 * perag block reservations can over reserve physical block availability
> > > > +	 * and fail physical allocation, which leads to much more serious
> > > > +	 * problems (i.e. transaction abort, pagecache discards, etc.) than
> > > > +	 * slightly premature -ENOSPC.
> > > > +	 */
> > > > +	set_aside = mp->m_alloc_set_aside;
> > > > +	if (mp->m_has_agresv)
> > > > +		set_aside += atomic64_read(&mp->m_allocbt_blks);
> > > > +
> > > >  	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
> > > > -	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
> > > > +	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
> > > >  				     XFS_FDBLOCKS_BATCH) >= 0) {
> > > >  		/* we had space! */
> > > >  		return 0;
> > > > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> > > > index 489d9b2c53d9..041f437dc117 100644
> > > > --- a/fs/xfs/xfs_mount.h
> > > > +++ b/fs/xfs/xfs_mount.h
> > > > @@ -171,6 +171,12 @@ typedef struct xfs_mount {
> > > >  	 * extents or anything related to the rt device.
> > > >  	 */
> > > >  	struct percpu_counter	m_delalloc_blks;
> > > > +	/*
> > > > +	 * Global count of allocation btree blocks in use across all AGs. Only
> > > > +	 * used when perag reservation is enabled. Helps prevent block
> > > > +	 * reservation from attempting to reserve allocation btree blocks.
> > > > +	 */
> > > > +	atomic64_t		m_allocbt_blks;
> > > >  
> > > >  	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
> > > >  	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */
> > > > -- 
> > > > 2.26.2
> > > > 
> > > 
> > 
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation
  2021-03-18 16:17 [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
  2021-03-18 16:17 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster
  2021-03-18 16:17 ` [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
@ 2021-04-09 14:17 ` Brian Foster
  2 siblings, 0 replies; 26+ messages in thread
From: Brian Foster @ 2021-04-09 14:17 UTC (permalink / raw)
  To: linux-xfs

On Thu, Mar 18, 2021 at 12:17:05PM -0400, Brian Foster wrote:
> Hi all,
> 
> This is v3 of the allocbt block set aside fixup. The primary change in
> v3 is to filter out rmapbt blocks from the usage accounting. rmapbt
> blocks live in free space similar to allocbt blocks, but are managed
> appropriately via perag reservation and so should not be set aside from
> reservation requests.
> 
> Brian
> 
> v3:
> - Use a mount flag for easy detection of active perag reservation.
> - Filter rmapbt blocks from allocbt block accounting.
> v2: https://lore.kernel.org/linux-xfs/20210222152108.896178-1-bfoster@redhat.com/
> - Use an atomic counter instead of a percpu counter.
> v1: https://lore.kernel.org/linux-xfs/20210217132339.651020-1-bfoster@redhat.com/
> 

Ping on this series..? AFAICT there is no outstanding feedback..

Brian

> Brian Foster (2):
>   xfs: set a mount flag when perag reservation is active
>   xfs: set aside allocation btree blocks from block reservation
> 
>  fs/xfs/libxfs/xfs_ag_resv.c     | 24 ++++++++++++++----------
>  fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
>  fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
>  fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
>  fs/xfs/xfs_mount.h              |  7 +++++++
>  5 files changed, 52 insertions(+), 11 deletions(-)
> 
> -- 
> 2.26.2
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-04-21 23:40       ` Darrick J. Wong
@ 2021-04-22 12:41         ` Brian Foster
  0 siblings, 0 replies; 26+ messages in thread
From: Brian Foster @ 2021-04-22 12:41 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs

On Wed, Apr 21, 2021 at 04:40:31PM -0700, Darrick J. Wong wrote:
> On Tue, Apr 20, 2021 at 12:24:36PM -0400, Brian Foster wrote:
> > On Tue, Apr 13, 2021 at 06:00:18PM -0700, Darrick J. Wong wrote:
> > > On Mon, Apr 12, 2021 at 09:30:59AM -0400, Brian Foster wrote:
> > > > The blocks used for allocation btrees (bnobt and countbt) are
> > > > technically considered free space. This is because as free space is
> > > > used, allocbt blocks are removed and naturally become available for
> > > > traditional allocation. However, this means that a significant
> > > > portion of free space may consist of in-use btree blocks if free
> > > > space is severely fragmented.
> > > > 
> > > > On large filesystems with large perag reservations, this can lead to
> > > > a rare but nasty condition where a significant amount of physical
> > > > free space is available, but the majority of actual usable blocks
> > > > consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> > > > AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> > > > blocks tracked across ~300 total allocbt blocks, but effectively at
> > > > 100% full because the free space is entirely consumed by
> > > > refcountbt perag reservation.
> > > > 
> > > > Such a large perag reservation is by design on large filesystems.
> > > > The problem is that because the free space is so fragmented, this AG
> > > > contributes the 300 or so allocbt blocks to the global counters as
> > > > free space. If this pattern repeats across enough AGs, the
> > > > filesystem lands in a state where global block reservation can
> > > > outrun physical block availability. For example, a streaming
> > > > buffered write on the affected filesystem continues to allow delayed
> > > > allocation beyond the point where writeback starts to fail due to
> > > > physical block allocation failures. The expected behavior is for the
> > > > delalloc block reservation to fail gracefully with -ENOSPC before
> > > > physical block allocation failure is a possibility.
> > > > 
> > > > To address this problem, introduce an in-core counter to track the
> > > > sum of all allocbt blocks in use by the filesystem. Use the new
> > > > counter to set these blocks aside at reservation time and thus
> > > > ensure they cannot be reserved until truly available. Since this is
> > > > only necessary when perag reservations are active and the counter
> > > > requires a read of each AGF to fully populate, only enforce on perag
> > > > res enabled filesystems. This allows initialization of the counter
> > > > at ->pagf_init time because the perag reservation init code reads
> > > > each AGF at mount time.
> > > > 
> > > > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > > > ---
> > > >  fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
> > > >  fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
> > > >  fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
> > > >  fs/xfs/xfs_mount.h              |  6 ++++++
> > > >  4 files changed, 37 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> > > > index aaa19101bb2a..144e2d68245c 100644
> > > > --- a/fs/xfs/libxfs/xfs_alloc.c
> > > > +++ b/fs/xfs/libxfs/xfs_alloc.c
> > > > @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf(
> > > >  	struct xfs_agf		*agf;		/* ag freelist header */
> > > >  	struct xfs_perag	*pag;		/* per allocation group data */
> > > >  	int			error;
> > > > +	uint32_t		allocbt_blks;
> > > >  
> > > >  	trace_xfs_alloc_read_agf(mp, agno);
> > > >  
> > > > @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf(
> > > >  		pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
> > > >  		pag->pagf_init = 1;
> > > >  		pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
> > > > +
> > > > +		/*
> > > > +		 * Update the global in-core allocbt block counter. Filter
> > > > +		 * rmapbt blocks from the on-disk counter because those are
> > > > +		 * managed by perag reservation.
> > > > +		 */
> > > > +		if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) {
> > > > +			allocbt_blks = pag->pagf_btreeblks -
> > > > +					be32_to_cpu(agf->agf_rmap_blocks);
> > > > +			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);
> > > 
> > > This part is still bothering me.  The bug you're trying to fix is an
> > > oversight in the per-AG reservation code where I forgot to account for
> > > the fact that freespace btree blocks referencing reserved free space are
> > > themselves not available for allocation, and are thus reserved.
> > > 
> > 
> > Sort of. These blocks are not available for allocation, but they are
> > actually in use while still being accounted as free space. They are not
> > held as "reserved" at any point in the manner a perag reservation (or
> > delalloc extent) is.
> > 
> > FWIW, I don't necessarily consider this an oversight of perag
> > reservation as much as that the perag reservation can expose a problem
> > with in-use allocbt blocks being accounted as free blocks. It allows the
> > user to create conditions where the majority of unreserved && free
> > fdblocks are actually in-use allocbt blocks. The allocbt block
> > accounting is by design, however, because these blocks naturally become
> > available as extents are freed. This is essentially the gap that this
> > patch tries to close... to preserve the higher level allocbt block
> > accounting (i.e. remaining as free from the user perspective) without
> > allowing them to be incorrectly reserved.
> 
> <nod> That part I agree with...
> 
> > > xfs_ag_resv_init already has all the data it needs to know if we
> > > actually made a reservation, and xfs_ag_resv.c is where we put all the
> > > knowledge around how incore space reservations work.  This code adjusts
> > > an incore space reservation count, so why isn't it in xfs_ag_resv_init?
> > > 
> > 
> > ->m_allocbt_blks is just a straight counter for in-use allocbt blocks.
> > It's not a reservation of any kind. The code above simply adjusts the
> > counter based on allocbt modifications. The policy side of things is the
> > change to xfs_mod_fdblocks() that factors the allocbt count into the
> > blocks actually available for a given block reservation attempt. Based
> > on that, I'm not sure why the code would live in xfs_ag_resv_init()..?
> > I'm also not totally clear on whether you're asking about physical code
> > location vs. about a potential alternative implementation.
> 
> Just physical code location; see below.
> 
> > FWIW, I did think a little bit about using a perag reservation for
> > allocbt blocks earlier on, but IIRC it looked overcomplicated and I
> > didn't want to get into potentially changing how allocbt blocks are
> > accounted for what amounts to a corner case. If you had something else
> > in mind, can you please elaborate?
> 
> ...I really only meant to suggest moving the part that reads the AGF and
> adds (btreeblks - rmapblocks) to mp->m_allocbt_blks to xfs_ag_resv_init,
> not a total redesign of the whole patch.
> 

Ok..

> > > I also think that not giving back the m_allocbt_blks diversion when we
> > > tear down the incore reservation is leaving a logic bomb for someone to
> > > trip over in the future.  I can't tell if omitting that actually results
> > > in a difference in behavior, but it still doesn't smell right...
> > > 
> > 
> > As above, ->m_allocbt_blks is just a counter. That's why it is modified
> > unconditionally. It just represents an in-core sum of what is tracked in
> > the AGF headers. The flag from patch 1 primarily exists to indicate that
> > the counter has been fully initialized by virtue of reading in all AGF
> > headers at mount time (which in turn only happens when we have a perag
> > res). I'm not sure it makes sense to artificially reset either (though
> > I think it's debatable whether the mp flag is required).
> > 
> > Brian
> 
> Something like this, maybe?
> 
> --D
> 
>  fs/xfs/libxfs/xfs_ag_resv.c     |   50 ++++++++++++++++++++++++++++++++++++++-
>  fs/xfs/libxfs/xfs_alloc_btree.c |    2 ++
>  fs/xfs/xfs_mount.c              |   18 +++++++++++++-
>  fs/xfs/xfs_mount.h              |    6 +++++
>  4 files changed, 74 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/xfs/libxfs/xfs_ag_resv.c b/fs/xfs/libxfs/xfs_ag_resv.c
> index 2bb1fdf9b349..8aa6b3d3276f 100644
> --- a/fs/xfs/libxfs/xfs_ag_resv.c
> +++ b/fs/xfs/libxfs/xfs_ag_resv.c
> @@ -159,6 +159,41 @@ __xfs_ag_resv_free(
>  	return error;
>  }
>  
> +/* Hide freespace btree blocks that might be used to hold reserved blocks. */
> +static inline int
> +xfs_ag_resv_hide_allocbt_blocks(
> +	struct xfs_perag	*pag,
> +	struct xfs_trans	*tp,
> +	bool			setup)
> +{
> +	struct xfs_mount	*mp = pag->pag_mount;
> +	xfs_agnumber_t		agno = pag->pag_agno;
> +	struct xfs_buf		*agf_bp;
> +	struct xfs_agf		*agf;
> +	s64			delta;
> +	unsigned int		rmap_blocks;
> +	int			error;
> +
> +	error = xfs_alloc_read_agf(mp, tp, agno, 0, &agf_bp);
> +	if (error)
> +		return error;
> +
> +	agf = agf_bp->b_addr;
> +	rmap_blocks = be32_to_cpu(agf->agf_rmap_blocks);
> +
> +	/*
> +	 * Update the global in-core allocbt block counter.  Filter rmapbt
> +	 * blocks from the on-disk counter because those are managed by perag
> +	 * reservation.
> +	 */
> +	delta = (s64)pag->pagf_btreeblks - rmap_blocks;
> +	if (delta > 0)
> +		atomic64_add(setup ? delta : -delta, &mp->m_allocbt_blks);
> +
> +	xfs_trans_brelse(tp, agf_bp);
> +	return 0;
> +}
> +

Hm, Ok. I don't see any technical or functional problem with doing
something like this, but this concerns me in that it sort of deviates
from the intent of ->m_allocbt_blks being pure mechanism. I.e., the act
of changing it is intended to be for the purpose of reflecting what is
on-disk, not necessarily to hide or unhide blocks. It has that effect
with this patch, but I don't see any reason to artificially prevent
something else from using the counter as such in the future. In fact if
we did do something like this, I'd probably want to at least rethink the
variable name to make it clear it's some kind of dynamic reservation as
opposed to an accounting mechanism.

With all of that in mind, the initialization is located in the perag
init code because that is clearly and explicitly implemented as a
(one-time) init of in-core counters related to the AGF. Technically that
behavior is similar to your patch to the ag_resv code, but that code
supports the ability to init/release reservation and so does introduce a
bit of an impedance mismatch. I also don't think we should ever modify
->m_allocbt_blks for any reason other than to initialize from what is
on-disk or to track allocbt block allocs/frees, for much the same reason we
don't ever mutate ->m_fdblocks outside of the intended use case of
allocating/freeing blocks (this is also why I was using ->m_fdblocks as
a reference for your previous questions on how to deal with scrub). It's
not a reservation or policy of any kind, just a summarized counter of
preexisting per-ag accounting.
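As a rough sketch of what I mean by "just a counter", here is a toy
userspace model of the mechanism side (names like mock_mount are
illustrative only; the real code uses atomic64_t on struct xfs_mount,
reads the AGF fields shown in the diff, and hooks the btree block
alloc/free paths):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Toy stand-in for the relevant bits of struct xfs_mount. */
struct mock_mount {
	atomic_llong	allocbt_blks;	/* models mp->m_allocbt_blks */
};

/*
 * Mount-time init: sum the per-AG btree block counts from the AGF
 * headers, filtering out rmapbt blocks because those are already
 * covered by perag reservation.
 */
static void init_allocbt_count(struct mock_mount *mp,
			       const uint32_t *btreeblks,
			       const uint32_t *rmap_blocks,
			       int agcount)
{
	for (int agno = 0; agno < agcount; agno++)
		if (btreeblks[agno] > rmap_blocks[agno])
			atomic_fetch_add(&mp->allocbt_blks,
					 btreeblks[agno] - rmap_blocks[agno]);
}

/* Runtime hooks, mirroring xfs_allocbt_alloc_block()/_free_block(). */
static void allocbt_alloc_block(struct mock_mount *mp)
{
	atomic_fetch_add(&mp->allocbt_blks, 1);
}

static void allocbt_free_block(struct mock_mount *mp)
{
	atomic_fetch_sub(&mp->allocbt_blks, 1);
}
```

The counter is only ever adjusted to mirror on-disk state; nothing here
knows anything about reservations, which is the point.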

So all in all, this seems like unnecessary subsystem boundary crossing
to me. It also looks like about double the amount of code, fwiw. I
originally considered sending this as two patches but opted not to just
because the xfs_mod_fdblocks() change didn't seem worth an independent
patch. Based on this feedback, I'm thinking it might make more sense to
do that now. I might even split it into three if I can, if for nothing
else but to more clearly document that the relationship to perag res is
actually quite tenuous (and thus is something we should probably remove
if we end up fixing the mount time full AG scan behavior as you and Dave
were discussing back on a previous version of this patch).

Brian

>  /* Free a per-AG reservation. */
>  int
>  xfs_ag_resv_free(
> @@ -167,6 +202,13 @@ xfs_ag_resv_free(
>  	int				error;
>  	int				err2;
>  
> +	if (xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_asked ||
> +	    xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_asked) {
> +		error = xfs_ag_resv_hide_allocbt_blocks(pag, NULL, false);
> +		if (error)
> +			return error;
> +	}
> +
>  	error = __xfs_ag_resv_free(pag, XFS_AG_RESV_RMAPBT);
>  	err2 = __xfs_ag_resv_free(pag, XFS_AG_RESV_METADATA);
>  	if (err2 && !error)
> @@ -322,8 +364,14 @@ xfs_ag_resv_init(
>  	       pag->pagf_freeblks + pag->pagf_flcount);
>  #endif
>  out:
> -	if (has_resv)
> +	if (has_resv) {
>  		mp->m_has_agresv = true;
> +		/* XXX probably shouldn't clobber error here */
> +		error = xfs_ag_resv_hide_allocbt_blocks(pag, tp, true);
> +		if (error)
> +			return error;
> +	}
> +
>  	return error;
>  }
>  
> diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
> index 8e01231b308e..9f5a45f7baed 100644
> --- a/fs/xfs/libxfs/xfs_alloc_btree.c
> +++ b/fs/xfs/libxfs/xfs_alloc_btree.c
> @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block(
>  		return 0;
>  	}
>  
> +	atomic64_inc(&cur->bc_mp->m_allocbt_blks);
>  	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false);
>  
>  	xfs_trans_agbtree_delta(cur->bc_tp, 1);
> @@ -95,6 +96,7 @@ xfs_allocbt_free_block(
>  	if (error)
>  		return error;
>  
> +	atomic64_dec(&cur->bc_mp->m_allocbt_blks);
>  	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
>  			      XFS_EXTENT_BUSY_SKIP_DISCARD);
>  	xfs_trans_agbtree_delta(cur->bc_tp, -1);
> diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> index cb1e2c4702c3..1f835c375a89 100644
> --- a/fs/xfs/xfs_mount.c
> +++ b/fs/xfs/xfs_mount.c
> @@ -1188,6 +1188,7 @@ xfs_mod_fdblocks(
>  	int64_t			lcounter;
>  	long long		res_used;
>  	s32			batch;
> +	uint64_t		set_aside;
>  
>  	if (delta > 0) {
>  		/*
> @@ -1227,8 +1228,23 @@ xfs_mod_fdblocks(
>  	else
>  		batch = XFS_FDBLOCKS_BATCH;
>  
> +	/*
> +	 * Set aside allocbt blocks because these blocks are tracked as free
> +	 * space but not available for allocation. Technically this means that a
> +	 * single reservation cannot consume all remaining free space, but the
> +	 * ratio of allocbt blocks to usable free blocks should be rather small.
> +	 * The tradeoff without this is that filesystems that maintain high
> +	 * perag block reservations can over reserve physical block availability
> +	 * and fail physical allocation, which leads to much more serious
> +	 * problems (i.e. transaction abort, pagecache discards, etc.) than
> +	 * slightly premature -ENOSPC.
> +	 */
> +	set_aside = mp->m_alloc_set_aside;
> +	if (mp->m_has_agresv)
> +		set_aside += atomic64_read(&mp->m_allocbt_blks);
> +
>  	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
> -	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
> +	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
>  				     XFS_FDBLOCKS_BATCH) >= 0) {
>  		/* we had space! */
>  		return 0;
> diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> index 8847ffd29777..80b9f37f65e6 100644
> --- a/fs/xfs/xfs_mount.h
> +++ b/fs/xfs/xfs_mount.h
> @@ -171,6 +171,12 @@ typedef struct xfs_mount {
>  	 * extents or anything related to the rt device.
>  	 */
>  	struct percpu_counter	m_delalloc_blks;
> +	/*
> +	 * Global count of allocation btree blocks in use across all AGs. Only
> +	 * used when perag reservation is enabled. Helps prevent block
> +	 * reservation from attempting to reserve allocation btree blocks.
> +	 */
> +	atomic64_t		m_allocbt_blks;
>  
>  	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
>  	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-04-20 16:24     ` Brian Foster
@ 2021-04-21 23:40       ` Darrick J. Wong
  2021-04-22 12:41         ` Brian Foster
  0 siblings, 1 reply; 26+ messages in thread
From: Darrick J. Wong @ 2021-04-21 23:40 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-xfs

On Tue, Apr 20, 2021 at 12:24:36PM -0400, Brian Foster wrote:
> On Tue, Apr 13, 2021 at 06:00:18PM -0700, Darrick J. Wong wrote:
> > On Mon, Apr 12, 2021 at 09:30:59AM -0400, Brian Foster wrote:
> > > The blocks used for allocation btrees (bnobt and countbt) are
> > > technically considered free space. This is because as free space is
> > > used, allocbt blocks are removed and naturally become available for
> > > traditional allocation. However, this means that a significant
> > > portion of free space may consist of in-use btree blocks if free
> > > space is severely fragmented.
> > > 
> > > On large filesystems with large perag reservations, this can lead to
> > > a rare but nasty condition where a significant amount of physical
> > > free space is available, but the majority of actual usable blocks
> > > consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> > > AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> > > blocks tracked across ~300 total allocbt blocks, but effectively at
> > > 100% full because the free space is entirely consumed by
> > > refcountbt perag reservation.
> > > 
> > > Such a large perag reservation is by design on large filesystems.
> > > The problem is that because the free space is so fragmented, this AG
> > > contributes the 300 or so allocbt blocks to the global counters as
> > > free space. If this pattern repeats across enough AGs, the
> > > filesystem lands in a state where global block reservation can
> > > outrun physical block availability. For example, a streaming
> > > buffered write on the affected filesystem continues to allow delayed
> > > allocation beyond the point where writeback starts to fail due to
> > > physical block allocation failures. The expected behavior is for the
> > > delalloc block reservation to fail gracefully with -ENOSPC before
> > > physical block allocation failure is a possibility.
> > > 
> > > To address this problem, introduce an in-core counter to track the
> > > sum of all allocbt blocks in use by the filesystem. Use the new
> > > counter to set these blocks aside at reservation time and thus
> > > ensure they cannot be reserved until truly available. Since this is
> > > only necessary when perag reservations are active and the counter
> > > requires a read of each AGF to fully populate, only enforce on perag
> > > res enabled filesystems. This allows initialization of the counter
> > > at ->pagf_init time because the perag reservation init code reads
> > > each AGF at mount time.
> > > 
> > > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > > ---
> > >  fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
> > >  fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
> > >  fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
> > >  fs/xfs/xfs_mount.h              |  6 ++++++
> > >  4 files changed, 37 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> > > index aaa19101bb2a..144e2d68245c 100644
> > > --- a/fs/xfs/libxfs/xfs_alloc.c
> > > +++ b/fs/xfs/libxfs/xfs_alloc.c
> > > @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf(
> > >  	struct xfs_agf		*agf;		/* ag freelist header */
> > >  	struct xfs_perag	*pag;		/* per allocation group data */
> > >  	int			error;
> > > +	uint32_t		allocbt_blks;
> > >  
> > >  	trace_xfs_alloc_read_agf(mp, agno);
> > >  
> > > @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf(
> > >  		pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
> > >  		pag->pagf_init = 1;
> > >  		pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
> > > +
> > > +		/*
> > > +		 * Update the global in-core allocbt block counter. Filter
> > > +		 * rmapbt blocks from the on-disk counter because those are
> > > +		 * managed by perag reservation.
> > > +		 */
> > > +		if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) {
> > > +			allocbt_blks = pag->pagf_btreeblks -
> > > +					be32_to_cpu(agf->agf_rmap_blocks);
> > > +			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);
> > 
> > This part is still bothering me.  The bug you're trying to fix is an
> > oversight in the per-AG reservation code where I forgot to account for
> > the fact that freespace btree blocks referencing reserved free space are
> > themselves not available for allocation, and are thus reserved.
> > 
> 
> Sort of. These blocks are not available for allocation, but they are
> actually in use while still being accounted as free space. They are not
> held as "reserved" at any point in the manner a perag reservation (or
> delalloc extent) is.
> 
> FWIW, I don't necessarily consider this an oversight of perag
> reservation as much as that the perag reservation can expose a problem
> with in-use allocbt blocks being accounted as free blocks. It allows the
> user to create conditions where the majority of unreserved && free
> fdblocks are actually in-use allocbt blocks. The allocbt block
> accounting is by design, however, because these blocks naturally become
> available as extents are freed. This is essentially the gap that this
> patch tries to close... to preserve the higher level allocbt block
> accounting (i.e. remaining as free from the user perspective) without
> allowing them to be incorrectly reserved.

<nod> That part I agree with...

> > xfs_ag_resv_init already has all the data it needs to know if we
> > actually made a reservation, and xfs_ag_resv.c is where we put all the
> > knowledge around how incore space reservations work.  This code adjusts
> > an incore space reservation count, so why isn't it in xfs_ag_resv_init?
> > 
> 
> ->m_allocbt_blks is just a straight counter for in-use allocbt blocks.
> It's not a reservation of any kind. The code above simply adjusts the
> counter based on allocbt modifications. The policy side of things is the
> change to xfs_mod_fdblocks() that factors the allocbt count into the
> blocks actually available for a given block reservation attempt. Based
> on that, I'm not sure why the code would live in xfs_ag_resv_init()..?
> I'm also not totally clear on whether you're asking about physical code
> location vs. about a potential alternative implementation.

Just physical code location; see below.

> FWIW, I did think a little bit about using a perag reservation for
> allocbt blocks earlier on, but IIRC it looked overcomplicated and I
> didn't want to get into potentially changing how allocbt blocks are
> accounted for what amounts to a corner case. If you had something else
> in mind, can you please elaborate?

...I really only meant to suggest moving the part that reads the AGF and
adds (btreeblks - rmapblocks) to mp->m_allocbt_blks to xfs_ag_resv_init,
not a total redesign of the whole patch.

> > I also think that not giving back the m_allocbt_blks diversion when we
> > tear down the incore reservation is leaving a logic bomb for someone to
> > trip over in the future.  I can't tell if omitting that actually results
> > in a difference in behavior, but it still doesn't smell right...
> > 
> 
> As above, ->m_allocbt_blks is just a counter. That's why it is modified
> unconditionally. It just represents an in-core sum of what is tracked in
> the AGF headers. The flag from patch 1 primarily exists to indicate that
> the counter has been fully initialized by virtue of reading in all AGF
> headers at mount time (which in turn only happens when we have a perag
> res). I'm not sure it makes sense to artificially reset either (though
> I think it's debatable whether the mp flag is required).
> 
> Brian

Something like this, maybe?

--D

 fs/xfs/libxfs/xfs_ag_resv.c     |   50 ++++++++++++++++++++++++++++++++++++++-
 fs/xfs/libxfs/xfs_alloc_btree.c |    2 ++
 fs/xfs/xfs_mount.c              |   18 +++++++++++++-
 fs/xfs/xfs_mount.h              |    6 +++++
 4 files changed, 74 insertions(+), 2 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_ag_resv.c b/fs/xfs/libxfs/xfs_ag_resv.c
index 2bb1fdf9b349..8aa6b3d3276f 100644
--- a/fs/xfs/libxfs/xfs_ag_resv.c
+++ b/fs/xfs/libxfs/xfs_ag_resv.c
@@ -159,6 +159,41 @@ __xfs_ag_resv_free(
 	return error;
 }
 
+/* Hide freespace btree blocks that might be used to hold reserved blocks. */
+static inline int
+xfs_ag_resv_hide_allocbt_blocks(
+	struct xfs_perag	*pag,
+	struct xfs_trans	*tp,
+	bool			setup)
+{
+	struct xfs_mount	*mp = pag->pag_mount;
+	xfs_agnumber_t		agno = pag->pag_agno;
+	struct xfs_buf		*agf_bp;
+	struct xfs_agf		*agf;
+	s64			delta;
+	unsigned int		rmap_blocks;
+	int			error;
+
+	error = xfs_alloc_read_agf(mp, tp, agno, 0, &agf_bp);
+	if (error)
+		return error;
+
+	agf = agf_bp->b_addr;
+	rmap_blocks = be32_to_cpu(agf->agf_rmap_blocks);
+
+	/*
+	 * Update the global in-core allocbt block counter.  Filter rmapbt
+	 * blocks from the on-disk counter because those are managed by perag
+	 * reservation.
+	 */
+	delta = (s64)pag->pagf_btreeblks - rmap_blocks;
+	if (delta > 0)
+		atomic64_add(setup ? delta : -delta, &mp->m_allocbt_blks);
+
+	xfs_trans_brelse(tp, agf_bp);
+	return 0;
+}
+
 /* Free a per-AG reservation. */
 int
 xfs_ag_resv_free(
@@ -167,6 +202,13 @@ xfs_ag_resv_free(
 	int				error;
 	int				err2;
 
+	if (xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_asked ||
+	    xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_asked) {
+		error = xfs_ag_resv_hide_allocbt_blocks(pag, NULL, false);
+		if (error)
+			return error;
+	}
+
 	error = __xfs_ag_resv_free(pag, XFS_AG_RESV_RMAPBT);
 	err2 = __xfs_ag_resv_free(pag, XFS_AG_RESV_METADATA);
 	if (err2 && !error)
@@ -322,8 +364,14 @@ xfs_ag_resv_init(
 	       pag->pagf_freeblks + pag->pagf_flcount);
 #endif
 out:
-	if (has_resv)
+	if (has_resv) {
 		mp->m_has_agresv = true;
+		/* XXX probably shouldn't clobber error here */
+		error = xfs_ag_resv_hide_allocbt_blocks(pag, tp, true);
+		if (error)
+			return error;
+	}
+
 	return error;
 }
 
diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
index 8e01231b308e..9f5a45f7baed 100644
--- a/fs/xfs/libxfs/xfs_alloc_btree.c
+++ b/fs/xfs/libxfs/xfs_alloc_btree.c
@@ -71,6 +71,7 @@ xfs_allocbt_alloc_block(
 		return 0;
 	}
 
+	atomic64_inc(&cur->bc_mp->m_allocbt_blks);
 	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false);
 
 	xfs_trans_agbtree_delta(cur->bc_tp, 1);
@@ -95,6 +96,7 @@ xfs_allocbt_free_block(
 	if (error)
 		return error;
 
+	atomic64_dec(&cur->bc_mp->m_allocbt_blks);
 	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
 			      XFS_EXTENT_BUSY_SKIP_DISCARD);
 	xfs_trans_agbtree_delta(cur->bc_tp, -1);
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index cb1e2c4702c3..1f835c375a89 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -1188,6 +1188,7 @@ xfs_mod_fdblocks(
 	int64_t			lcounter;
 	long long		res_used;
 	s32			batch;
+	uint64_t		set_aside;
 
 	if (delta > 0) {
 		/*
@@ -1227,8 +1228,23 @@ xfs_mod_fdblocks(
 	else
 		batch = XFS_FDBLOCKS_BATCH;
 
+	/*
+	 * Set aside allocbt blocks because these blocks are tracked as free
+	 * space but not available for allocation. Technically this means that a
+	 * single reservation cannot consume all remaining free space, but the
+	 * ratio of allocbt blocks to usable free blocks should be rather small.
+	 * The tradeoff without this is that filesystems that maintain high
+	 * perag block reservations can over reserve physical block availability
+	 * and fail physical allocation, which leads to much more serious
+	 * problems (i.e. transaction abort, pagecache discards, etc.) than
+	 * slightly premature -ENOSPC.
+	 */
+	set_aside = mp->m_alloc_set_aside;
+	if (mp->m_has_agresv)
+		set_aside += atomic64_read(&mp->m_allocbt_blks);
+
 	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
-	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
+	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
 				     XFS_FDBLOCKS_BATCH) >= 0) {
 		/* we had space! */
 		return 0;
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index 8847ffd29777..80b9f37f65e6 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -171,6 +171,12 @@ typedef struct xfs_mount {
 	 * extents or anything related to the rt device.
 	 */
 	struct percpu_counter	m_delalloc_blks;
+	/*
+	 * Global count of allocation btree blocks in use across all AGs. Only
+	 * used when perag reservation is enabled. Helps prevent block
+	 * reservation from attempting to reserve allocation btree blocks.
+	 */
+	atomic64_t		m_allocbt_blks;
 
 	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
 	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-04-14  1:00   ` Darrick J. Wong
@ 2021-04-20 16:24     ` Brian Foster
  2021-04-21 23:40       ` Darrick J. Wong
  0 siblings, 1 reply; 26+ messages in thread
From: Brian Foster @ 2021-04-20 16:24 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs

On Tue, Apr 13, 2021 at 06:00:18PM -0700, Darrick J. Wong wrote:
> On Mon, Apr 12, 2021 at 09:30:59AM -0400, Brian Foster wrote:
> > The blocks used for allocation btrees (bnobt and countbt) are
> > technically considered free space. This is because as free space is
> > used, allocbt blocks are removed and naturally become available for
> > traditional allocation. However, this means that a significant
> > portion of free space may consist of in-use btree blocks if free
> > space is severely fragmented.
> > 
> > On large filesystems with large perag reservations, this can lead to
> > a rare but nasty condition where a significant amount of physical
> > free space is available, but the majority of actual usable blocks
> > consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> > AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> > blocks tracked across ~300 total allocbt blocks, but effectively at
> > 100% full because the free space is entirely consumed by
> > refcountbt perag reservation.
> > 
> > Such a large perag reservation is by design on large filesystems.
> > The problem is that because the free space is so fragmented, this AG
> > contributes the 300 or so allocbt blocks to the global counters as
> > free space. If this pattern repeats across enough AGs, the
> > filesystem lands in a state where global block reservation can
> > outrun physical block availability. For example, a streaming
> > buffered write on the affected filesystem continues to allow delayed
> > allocation beyond the point where writeback starts to fail due to
> > physical block allocation failures. The expected behavior is for the
> > delalloc block reservation to fail gracefully with -ENOSPC before
> > physical block allocation failure is a possibility.
> > 
> > To address this problem, introduce an in-core counter to track the
> > sum of all allocbt blocks in use by the filesystem. Use the new
> > counter to set these blocks aside at reservation time and thus
> > ensure they cannot be reserved until truly available. Since this is
> > only necessary when perag reservations are active and the counter
> > requires a read of each AGF to fully populate, only enforce on perag
> > res enabled filesystems. This allows initialization of the counter
> > at ->pagf_init time because the perag reservation init code reads
> > each AGF at mount time.
> > 
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
> >  fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
> >  fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
> >  fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
> >  fs/xfs/xfs_mount.h              |  6 ++++++
> >  4 files changed, 37 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> > index aaa19101bb2a..144e2d68245c 100644
> > --- a/fs/xfs/libxfs/xfs_alloc.c
> > +++ b/fs/xfs/libxfs/xfs_alloc.c
> > @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf(
> >  	struct xfs_agf		*agf;		/* ag freelist header */
> >  	struct xfs_perag	*pag;		/* per allocation group data */
> >  	int			error;
> > +	uint32_t		allocbt_blks;
> >  
> >  	trace_xfs_alloc_read_agf(mp, agno);
> >  
> > @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf(
> >  		pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
> >  		pag->pagf_init = 1;
> >  		pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
> > +
> > +		/*
> > +		 * Update the global in-core allocbt block counter. Filter
> > +		 * rmapbt blocks from the on-disk counter because those are
> > +		 * managed by perag reservation.
> > +		 */
> > +		if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) {
> > +			allocbt_blks = pag->pagf_btreeblks -
> > +					be32_to_cpu(agf->agf_rmap_blocks);
> > +			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);
> 
> This part is still bothering me.  The bug you're trying to fix is an
> oversight in the per-AG reservation code where I forgot to account for
> the fact that freespace btree blocks referencing reserved free space are
> themselves not available for allocation, and are thus reserved.
> 

Sort of. These blocks are not available for allocation, but they are
actually in use while still being accounted as free space. They are not
held as "reserved" at any point in the manner a perag reservation (or
delalloc extent) is.

FWIW, I don't necessarily consider this an oversight of perag
reservation as much as that the perag reservation can expose a problem
with in-use allocbt blocks being accounted as free blocks. It allows the
user to create conditions where the majority of unreserved && free
fdblocks are actually in-use allocbt blocks. The allocbt block
accounting is by design, however, because these blocks naturally become
available as extents are freed. This is essentially the gap that this
patch tries to close... to preserve the higher level allocbt block
accounting (i.e. remaining as free from the user perspective) without
allowing them to be incorrectly reserved.
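To make the accounting concrete, here is a toy model (not XFS code,
hypothetical numbers and names) of the behavior described above:
in-use allocbt blocks remain part of the "free" count and only become
allocatable again as extents are freed and the btrees shrink:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model only -- struct fields and helper are illustrative
 * stand-ins, not the real xfs_mount/perag counters.
 */
struct toy_ag {
	uint64_t fdblocks;	/* blocks counted as free */
	uint64_t allocbt_blks;	/* in-use bnobt/cntbt blocks within that */
};

/* Freeing an extent grows free space; if records merge, a btree
 * block can be released and returns to usable free space. */
static void toy_free_extent(struct toy_ag *ag, uint64_t len,
			    bool releases_btree_block)
{
	ag->fdblocks += len;
	if (releases_btree_block)
		ag->allocbt_blks--;
}
```

With badly fragmented free space, fdblocks can be almost entirely
in-use allocbt blocks (e.g. 300 free with 280 of those being btree
blocks), which is the state the problematic filesystem was in.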

> xfs_ag_resv_init already has all the data it needs to know if we
> actually made a reservation, and xfs_ag_resv.c is where we put all the
> knowledge around how incore space reservations work.  This code adjusts
> an incore space reservation count, so why isn't it in xfs_ag_resv_init?
> 

->m_allocbt_blks is just a straight counter for in-use allocbt blocks.
It's not a reservation of any kind. The code above simply adjusts the
counter based on allocbt modifications. The policy side of things is the
change to xfs_mod_fdblocks() that factors the allocbt count into the
blocks actually available for a given block reservation attempt. Based
on that, I'm not sure why the code would live in xfs_ag_resv_init()..?
I'm also not totally clear on whether you're asking about physical code
location vs. about a potential alternative implementation.
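For reference, the policy side reduces to something like the following
sketch (simplified from the xfs_mod_fdblocks() hunk quoted below;
names and the flat comparison are stand-ins for the real percpu
counter machinery):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Simplified sketch of the reservation check, not the real
 * xfs_mod_fdblocks(): when a perag reservation is active, in-use
 * allocbt blocks are added to the set-aside so a block reservation
 * cannot consume them.
 */
static bool can_reserve(uint64_t fdblocks, uint64_t delta,
			uint64_t alloc_set_aside, bool has_agresv,
			uint64_t allocbt_blks)
{
	uint64_t set_aside = alloc_set_aside;

	if (has_agresv)
		set_aside += allocbt_blks;

	return fdblocks >= set_aside + delta;
}
```

With 300 free blocks of which 280 are in-use allocbt blocks, a
100-block reservation succeeds without the set-aside but correctly
fails with it.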

FWIW, I did think a little bit about using a perag reservation for
allocbt blocks earlier on, but IIRC it looked overcomplicated and I
didn't want to get into potentially changing how allocbt blocks are
accounted for what amounts to a corner case. If you had something else
in mind, can you please elaborate?

> I also think that not giving back the m_allocbt_blks diversion when we
> tear down the incore reservation is leaving a logic bomb for someone to
> trip over in the future.  I can't tell if omitting that actually results
> in a difference in behavior, but it still doesn't smell right...
> 

As above, ->m_allocbt_blks is just a counter. That's why it is modified
unconditionally. It just represents an in-core sum of what is tracked in
the AGF headers. The flag from patch 1 primarily exists to indicate that
the counter has been fully initialized by virtue of reading in all AGF
headers at mount time (which in turn only happens when we have a perag
res). I'm not sure it makes sense to artificially reset either (though
I think it's debatable whether the mp flag is required).
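The per-AG contribution to that counter boils down to the subtraction
in the hunk quoted above; a standalone sketch (simplified names for
the on-disk AGF fields):

```c
#include <stdint.h>

/*
 * Sketch of the per-AG contribution to m_allocbt_blks: rmapbt blocks
 * also live in free space but are covered by perag reservation, so
 * they are filtered out of the on-disk btree block count. The guard
 * avoids underflow if rmap_blocks exceeds btreeblks.
 */
static uint64_t agf_allocbt_contribution(uint32_t btreeblks,
					 uint32_t rmap_blocks)
{
	return btreeblks > rmap_blocks ? btreeblks - rmap_blocks : 0;
}
```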

Brian

> --D
> 
> > +		}
> >  	}
> >  #ifdef DEBUG
> >  	else if (!XFS_FORCED_SHUTDOWN(mp)) {
> > diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
> > index 8e01231b308e..9f5a45f7baed 100644
> > --- a/fs/xfs/libxfs/xfs_alloc_btree.c
> > +++ b/fs/xfs/libxfs/xfs_alloc_btree.c
> > @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block(
> >  		return 0;
> >  	}
> >  
> > +	atomic64_inc(&cur->bc_mp->m_allocbt_blks);
> >  	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false);
> >  
> >  	xfs_trans_agbtree_delta(cur->bc_tp, 1);
> > @@ -95,6 +96,7 @@ xfs_allocbt_free_block(
> >  	if (error)
> >  		return error;
> >  
> > +	atomic64_dec(&cur->bc_mp->m_allocbt_blks);
> >  	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
> >  			      XFS_EXTENT_BUSY_SKIP_DISCARD);
> >  	xfs_trans_agbtree_delta(cur->bc_tp, -1);
> > diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> > index cb1e2c4702c3..1f835c375a89 100644
> > --- a/fs/xfs/xfs_mount.c
> > +++ b/fs/xfs/xfs_mount.c
> > @@ -1188,6 +1188,7 @@ xfs_mod_fdblocks(
> >  	int64_t			lcounter;
> >  	long long		res_used;
> >  	s32			batch;
> > +	uint64_t		set_aside;
> >  
> >  	if (delta > 0) {
> >  		/*
> > @@ -1227,8 +1228,23 @@ xfs_mod_fdblocks(
> >  	else
> >  		batch = XFS_FDBLOCKS_BATCH;
> >  
> > +	/*
> > +	 * Set aside allocbt blocks because these blocks are tracked as free
> > +	 * space but not available for allocation. Technically this means that a
> > +	 * single reservation cannot consume all remaining free space, but the
> > +	 * ratio of allocbt blocks to usable free blocks should be rather small.
> > +	 * The tradeoff without this is that filesystems that maintain high
> > +	 * perag block reservations can over reserve physical block availability
> > +	 * and fail physical allocation, which leads to much more serious
> > +	 * problems (i.e. transaction abort, pagecache discards, etc.) than
> > +	 * slightly premature -ENOSPC.
> > +	 */
> > +	set_aside = mp->m_alloc_set_aside;
> > +	if (mp->m_has_agresv)
> > +		set_aside += atomic64_read(&mp->m_allocbt_blks);
> > +
> >  	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
> > -	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
> > +	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
> >  				     XFS_FDBLOCKS_BATCH) >= 0) {
> >  		/* we had space! */
> >  		return 0;
> > diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> > index 8847ffd29777..80b9f37f65e6 100644
> > --- a/fs/xfs/xfs_mount.h
> > +++ b/fs/xfs/xfs_mount.h
> > @@ -171,6 +171,12 @@ typedef struct xfs_mount {
> >  	 * extents or anything related to the rt device.
> >  	 */
> >  	struct percpu_counter	m_delalloc_blks;
> > +	/*
> > +	 * Global count of allocation btree blocks in use across all AGs. Only
> > +	 * used when perag reservation is enabled. Helps prevent block
> > +	 * reservation from attempting to reserve allocation btree blocks.
> > +	 */
> > +	atomic64_t		m_allocbt_blks;
> >  
> >  	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
> >  	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */
> > -- 
> > 2.26.3
> > 
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-04-12 13:30 ` [PATCH v3 2/2] " Brian Foster
  2021-04-12 22:21   ` kernel test robot
@ 2021-04-14  1:00   ` Darrick J. Wong
  2021-04-20 16:24     ` Brian Foster
  1 sibling, 1 reply; 26+ messages in thread
From: Darrick J. Wong @ 2021-04-14  1:00 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-xfs

On Mon, Apr 12, 2021 at 09:30:59AM -0400, Brian Foster wrote:
> The blocks used for allocation btrees (bnobt and countbt) are
> technically considered free space. This is because as free space is
> used, allocbt blocks are removed and naturally become available for
> traditional allocation. However, this means that a significant
> portion of free space may consist of in-use btree blocks if free
> space is severely fragmented.
> 
> On large filesystems with large perag reservations, this can lead to
> a rare but nasty condition where a significant amount of physical
> free space is available, but the majority of actual usable blocks
> consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> blocks tracked across ~300 total allocbt blocks, but effectively at
> 100% full because the the free space is entirely consumed by
> refcountbt perag reservation.
> 
> Such a large perag reservation is by design on large filesystems.
> The problem is that because the free space is so fragmented, this AG
> contributes the 300 or so allocbt blocks to the global counters as
> free space. If this pattern repeats across enough AGs, the
> filesystem lands in a state where global block reservation can
> outrun physical block availability. For example, a streaming
> buffered write on the affected filesystem continues to allow delayed
> allocation beyond the point where writeback starts to fail due to
> physical block allocation failures. The expected behavior is for the
> delalloc block reservation to fail gracefully with -ENOSPC before
> physical block allocation failure is a possibility.
> 
> To address this problem, introduce an in-core counter to track the
> sum of all allocbt blocks in use by the filesystem. Use the new
> counter to set these blocks aside at reservation time and thus
> ensure they cannot be reserved until truly available. Since this is
> only necessary when perag reservations are active and the counter
> requires a read of each AGF to fully populate, only enforce on perag
> res enabled filesystems. This allows initialization of the counter
> at ->pagf_init time because the perag reservation init code reads
> each AGF at mount time.
> 
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
>  fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
>  fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
>  fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
>  fs/xfs/xfs_mount.h              |  6 ++++++
>  4 files changed, 37 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> index aaa19101bb2a..144e2d68245c 100644
> --- a/fs/xfs/libxfs/xfs_alloc.c
> +++ b/fs/xfs/libxfs/xfs_alloc.c
> @@ -3036,6 +3036,7 @@ xfs_alloc_read_agf(
>  	struct xfs_agf		*agf;		/* ag freelist header */
>  	struct xfs_perag	*pag;		/* per allocation group data */
>  	int			error;
> +	uint32_t		allocbt_blks;
>  
>  	trace_xfs_alloc_read_agf(mp, agno);
>  
> @@ -3066,6 +3067,17 @@ xfs_alloc_read_agf(
>  		pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
>  		pag->pagf_init = 1;
>  		pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
> +
> +		/*
> +		 * Update the global in-core allocbt block counter. Filter
> +		 * rmapbt blocks from the on-disk counter because those are
> +		 * managed by perag reservation.
> +		 */
> +		if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) {
> +			allocbt_blks = pag->pagf_btreeblks -
> +					be32_to_cpu(agf->agf_rmap_blocks);
> +			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);

This part is still bothering me.  The bug you're trying to fix is an
oversight in the per-AG reservation code where I forgot to account for
the fact that freespace btree blocks referencing reserved free space are
themselves not available for allocation, and are thus reserved.

xfs_ag_resv_init already has all the data it needs to know if we
actually made a reservation, and xfs_ag_resv.c is where we put all the
knowledge around how incore space reservations work.  This code adjusts
an incore space reservation count, so why isn't it in xfs_ag_resv_init?

I also think that not giving back the m_allocbt_blks diversion when we
tear down the incore reservation is leaving a logic bomb for someone to
trip over in the future.  I can't tell if omitting that actually results
in a difference in behavior, but it still doesn't smell right...

--D

> +		}
>  	}
>  #ifdef DEBUG
>  	else if (!XFS_FORCED_SHUTDOWN(mp)) {
> diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
> index 8e01231b308e..9f5a45f7baed 100644
> --- a/fs/xfs/libxfs/xfs_alloc_btree.c
> +++ b/fs/xfs/libxfs/xfs_alloc_btree.c
> @@ -71,6 +71,7 @@ xfs_allocbt_alloc_block(
>  		return 0;
>  	}
>  
> +	atomic64_inc(&cur->bc_mp->m_allocbt_blks);
>  	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false);
>  
>  	xfs_trans_agbtree_delta(cur->bc_tp, 1);
> @@ -95,6 +96,7 @@ xfs_allocbt_free_block(
>  	if (error)
>  		return error;
>  
> +	atomic64_dec(&cur->bc_mp->m_allocbt_blks);
>  	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
>  			      XFS_EXTENT_BUSY_SKIP_DISCARD);
>  	xfs_trans_agbtree_delta(cur->bc_tp, -1);
> diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> index cb1e2c4702c3..1f835c375a89 100644
> --- a/fs/xfs/xfs_mount.c
> +++ b/fs/xfs/xfs_mount.c
> @@ -1188,6 +1188,7 @@ xfs_mod_fdblocks(
>  	int64_t			lcounter;
>  	long long		res_used;
>  	s32			batch;
> +	uint64_t		set_aside;
>  
>  	if (delta > 0) {
>  		/*
> @@ -1227,8 +1228,23 @@ xfs_mod_fdblocks(
>  	else
>  		batch = XFS_FDBLOCKS_BATCH;
>  
> +	/*
> +	 * Set aside allocbt blocks because these blocks are tracked as free
> +	 * space but not available for allocation. Technically this means that a
> +	 * single reservation cannot consume all remaining free space, but the
> +	 * ratio of allocbt blocks to usable free blocks should be rather small.
> +	 * The tradeoff without this is that filesystems that maintain high
> +	 * perag block reservations can over reserve physical block availability
> +	 * and fail physical allocation, which leads to much more serious
> +	 * problems (i.e. transaction abort, pagecache discards, etc.) than
> +	 * slightly premature -ENOSPC.
> +	 */
> +	set_aside = mp->m_alloc_set_aside;
> +	if (mp->m_has_agresv)
> +		set_aside += atomic64_read(&mp->m_allocbt_blks);
> +
>  	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
> -	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
> +	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
>  				     XFS_FDBLOCKS_BATCH) >= 0) {
>  		/* we had space! */
>  		return 0;
> diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> index 8847ffd29777..80b9f37f65e6 100644
> --- a/fs/xfs/xfs_mount.h
> +++ b/fs/xfs/xfs_mount.h
> @@ -171,6 +171,12 @@ typedef struct xfs_mount {
>  	 * extents or anything related to the rt device.
>  	 */
>  	struct percpu_counter	m_delalloc_blks;
> +	/*
> +	 * Global count of allocation btree blocks in use across all AGs. Only
> +	 * used when perag reservation is enabled. Helps prevent block
> +	 * reservation from attempting to reserve allocation btree blocks.
> +	 */
> +	atomic64_t		m_allocbt_blks;
>  
>  	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
>  	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */
> -- 
> 2.26.3
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-04-12 22:21   ` kernel test robot
@ 2021-04-14  0:41     ` Darrick J. Wong
  0 siblings, 0 replies; 26+ messages in thread
From: Darrick J. Wong @ 2021-04-14  0:41 UTC (permalink / raw)
  To: kernel test robot; +Cc: Brian Foster, linux-xfs, kbuild-all

On Tue, Apr 13, 2021 at 06:21:56AM +0800, kernel test robot wrote:
> Hi Brian,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on xfs-linux/for-next]
> [also build test ERROR on v5.12-rc7 next-20210412]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
> 
> url:    https://github.com/0day-ci/linux/commits/Brian-Foster/xfs-set-aside-allocation-btree-blocks-from-block-reservation/20210412-213222
> base:   https://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git for-next
> config: um-randconfig-r022-20210412 (attached as .config)
> compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
> reproduce (this is a W=1 build):
>         # https://github.com/0day-ci/linux/commit/5ffa1f5fa63a4a9c557f90beb5826866fa4aefd0
>         git remote add linux-review https://github.com/0day-ci/linux
>         git fetch --no-tags linux-review Brian-Foster/xfs-set-aside-allocation-btree-blocks-from-block-reservation/20210412-213222
>         git checkout 5ffa1f5fa63a4a9c557f90beb5826866fa4aefd0
>         # save the attached .config to linux build tree
>         make W=1 ARCH=um 
> 
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
> 
> All errors (new ones prefixed by >>):
> 
>    syscall.c:(.text+0xa023): undefined reference to `atomic64_inc_386'

What is all this build robot noise I'm suddenly getting about UML and
weird versions of atomic functions?  Seems to build fine on vanilla x64
and arm64, so....is there a real problem here???

--D

>    /usr/bin/ld: kernel/bpf/syscall.o: in function `bpf_link_put':
>    syscall.c:(.text+0xa036): undefined reference to `atomic64_dec_return_386'
>    /usr/bin/ld: kernel/bpf/syscall.o: in function `bpf_tracing_prog_attach':
>    syscall.c:(.text+0xa423): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: kernel/bpf/syscall.o: in function `bpf_link_get_from_fd':
>    syscall.c:(.text+0xa8a7): undefined reference to `atomic64_inc_386'
>    /usr/bin/ld: kernel/bpf/bpf_iter.o: in function `prepare_seq_file':
>    bpf_iter.c:(.text+0x1b2): undefined reference to `atomic64_inc_return_386'
>    /usr/bin/ld: fs/proc/task_mmu.o: in function `task_mem':
>    task_mmu.c:(.text+0x2bfd): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/ext4/balloc.o: in function `ext4_has_free_clusters':
>    balloc.c:(.text+0x94): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/ext4/dir.o: in function `ext4_dir_llseek':
>    dir.c:(.text+0x2d3): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/ext4/dir.o: in function `ext4_readdir':
>    dir.c:(.text+0x84b): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: dir.c:(.text+0xc71): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: dir.c:(.text+0xc9f): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: dir.c:(.text+0xe1b): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: dir.c:(.text+0xe44): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: dir.c:(.text+0xe72): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ext4/ialloc.o: in function `get_orlov_stats':
>    ialloc.c:(.text+0x205): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/ext4/inline.o: in function `ext4_add_dirent_to_inline.isra.0':
>    inline.c:(.text+0x14a9): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: inline.c:(.text+0x14df): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ext4/inline.o: in function `ext4_read_inline_dir':
>    inline.c:(.text+0x3dd8): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: inline.c:(.text+0x3ea7): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: inline.c:(.text+0x3ed5): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ext4/inode.o: in function `ext4_do_update_inode':
>    inode.c:(.text+0x3f13): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: inode.c:(.text+0x44bb): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/ext4/inode.o: in function `__ext4_iget':
>    inode.c:(.text+0x7c4d): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: inode.c:(.text+0x8385): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/ext4/inode.o: in function `ext4_mark_iloc_dirty':
>    inode.c:(.text+0x8a69): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: inode.c:(.text+0x8aa0): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ext4/inode.o: in function `ext4_setattr':
>    inode.c:(.text+0xf4ed): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: inode.c:(.text+0xf523): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ext4/ioctl.o: in function `swap_inode_boot_loader':
>    ioctl.c:(.text+0x1794): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/ext4/namei.o: in function `ext4_setent':
>    namei.c:(.text+0x2a0a): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: namei.c:(.text+0x2a41): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ext4/namei.o: in function `add_dirent_to_buf':
>    namei.c:(.text+0x498c): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: namei.c:(.text+0x49c3): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ext4/namei.o: in function `ext4_generic_delete_entry':
>    namei.c:(.text+0x7452): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: namei.c:(.text+0x7488): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ext4/namei.o: in function `ext4_rmdir':
>    namei.c:(.text+0x9f2d): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: namei.c:(.text+0x9f63): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ext4/resize.o: in function `ext4_update_super.isra.0':
>    resize.c:(.text+0x2566): undefined reference to `atomic64_add_386'
>    /usr/bin/ld: fs/ext4/xattr.o: in function `ext4_xattr_inode_iget':
>    xattr.c:(.text+0x6b6): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/ext4/xattr.o: in function `ext4_xattr_inode_update_ref':
>    xattr.c:(.text+0x776): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xattr.c:(.text+0x79f): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/ext4/xattr.o: in function `ext4_xattr_inode_lookup_create':
>    xattr.c:(.text+0x2baf): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/fat/dir.o: in function `fat_remove_entries':
>    dir.c:(.text+0x2cf0): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: dir.c:(.text+0x2d27): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/fat/misc.o: in function `fat_update_time':
>    misc.c:(.text+0x721): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: misc.c:(.text+0x75e): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ufs/dir.o: in function `ufs_commit_chunk':
>    dir.c:(.text+0x24): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: dir.c:(.text+0x5b): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ufs/dir.o: in function `ufs_readdir':
>    dir.c:(.text+0x62e): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: dir.c:(.text+0xa43): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: dir.c:(.text+0xa71): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/ufs/inode.o: in function `ufs_iget':
>    inode.c:(.text+0x4331): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: inode.c:(.text+0x4365): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/affs/dir.o: in function `affs_readdir':
>    dir.c:(.text+0xf5): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: dir.c:(.text+0x480): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: dir.c:(.text+0x4ae): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/affs/amigaffs.o: in function `affs_remove_hash':
>    amigaffs.c:(.text+0x180): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: amigaffs.c:(.text+0x1b6): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/affs/amigaffs.o: in function `affs_insert_hash':
>    amigaffs.c:(.text+0x4ec): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: amigaffs.c:(.text+0x522): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/xfs/xfs_trace.o: in function `trace_event_raw_event_xfs_log_assign_tail_lsn':
>    xfs_trace.c:(.text+0xe948): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_trace.c:(.text+0xe959): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/xfs_trace.o: in function `trace_event_raw_event_xfs_loggrant_class':
>    xfs_trace.c:(.text+0x1126c): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_trace.c:(.text+0x11286): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_trace.c:(.text+0x112b2): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/libxfs/xfs_alloc.o: in function `xfs_alloc_read_agf':
> >> xfs_alloc.c:(.text+0x63db): undefined reference to `atomic64_add_386'
>    /usr/bin/ld: fs/xfs/libxfs/xfs_alloc_btree.o: in function `xfs_allocbt_free_block':
> >> xfs_alloc_btree.c:(.text+0x844): undefined reference to `atomic64_dec_386'
>    /usr/bin/ld: fs/xfs/libxfs/xfs_alloc_btree.o: in function `xfs_allocbt_alloc_block':
> >> xfs_alloc_btree.c:(.text+0x8d5): undefined reference to `atomic64_inc_386'
>    /usr/bin/ld: fs/xfs/libxfs/xfs_inode_buf.o: in function `xfs_inode_to_disk':
>    xfs_inode_buf.c:(.text+0x5e7): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/libxfs/xfs_inode_buf.o: in function `xfs_inode_from_disk':
>    xfs_inode_buf.c:(.text+0x1409): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/xfs/libxfs/xfs_trans_inode.o: in function `xfs_trans_log_inode':
>    xfs_trans_inode.c:(.text+0x490): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_trans_inode.c:(.text+0x4d8): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/xfs/xfs_icache.o: in function `xfs_iget_cache_hit':
>    xfs_icache.c:(.text+0x1d0c): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_icache.c:(.text+0x1d78): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/xfs/xfs_iops.o: in function `xfs_vn_update_time':
>    xfs_iops.c:(.text+0xab2): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_iops.c:(.text+0xaf8): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/xfs/xfs_inode.o: in function `xfs_init_new_inode':
>    xfs_inode.c:(.text+0x27d): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/xfs/xfs_mount.o: in function `xfs_mod_fdblocks':
> >> xfs_mount.c:(.text+0x24a2): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/xfs_sysfs.o: in function `write_grant_head_show':
>    xfs_sysfs.c:(.text+0x160): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/xfs_sysfs.o: in function `reserve_grant_head_show':
>    xfs_sysfs.c:(.text+0x1a6): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/xfs_sysfs.o: in function `log_tail_lsn_show':
>    xfs_sysfs.c:(.text+0x1ec): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_space_left':
>    xfs_log.c:(.text+0x2cf): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/xfs_log.o:xfs_log.c:(.text+0x2e1): more undefined references to `atomic64_read_386' follow
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_grant_head_init':
>    xfs_log.c:(.text+0x4e5): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_alloc_log':
>    xfs_log.c:(.text+0xceb): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: xfs_log.c:(.text+0xcf6): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_assign_tail_lsn_locked':
>    xfs_log.c:(.text+0x1951): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x196d): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_grant_push_threshold':
>    xfs_log.c:(.text+0x1cc5): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x1ceb): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xfs_log_regrant':
>    xfs_log.c:(.text+0x258d): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x25b2): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_state_do_callback':
>    xfs_log.c:(.text+0x27ab): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x2820): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_sync':
>    xfs_log.c:(.text+0x310e): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x3134): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: xfs_log.c:(.text+0x3243): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x3269): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xfs_log_ticket_regrant':
>    xfs_log.c:(.text+0x5483): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x54be): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: xfs_log.c:(.text+0x54e7): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x5522): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: xfs_log.c:(.text+0x557f): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x559c): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xfs_log_ticket_ungrant':
>    xfs_log.c:(.text+0x5798): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x57d6): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: xfs_log.c:(.text+0x57f9): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x5834): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/xfs/xfs_log.o: in function `xfs_log_reserve':
>    xfs_log.c:(.text+0x5d14): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x5d39): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: xfs_log.c:(.text+0x5da8): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log.c:(.text+0x5dcd): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/xfs/xfs_inode_item.o: in function `xfs_inode_item_format':
>    xfs_inode_item.c:(.text+0x1da9): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/xfs/xfs_log_recover.o: in function `xlog_set_state':
>    xfs_log_recover.c:(.text+0x1e3): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: xfs_log_recover.c:(.text+0x1fd): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: xfs_log_recover.c:(.text+0x21c): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: xfs_log_recover.c:(.text+0x23b): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/xfs/xfs_log_recover.o: in function `xlog_check_unmount_rec':
>    xfs_log_recover.c:(.text+0x460e): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/xfs/xfs_log_recover.o:xfs_log_recover.c:(.text+0x461f): more undefined references to `atomic64_set_386' follow
>    /usr/bin/ld: fs/xfs/xfs_log_recover.o: in function `xlog_find_tail':
>    xfs_log_recover.c:(.text+0x57be): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xfs_log_recover.c:(.text+0x597a): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/nilfs2/inode.o: in function `nilfs_inode_add_blocks':
>    inode.c:(.text+0x5c4): undefined reference to `atomic64_add_386'
>    /usr/bin/ld: fs/nilfs2/inode.o: in function `nilfs_inode_sub_blocks':
>    inode.c:(.text+0x60f): undefined reference to `atomic64_sub_386'
>    /usr/bin/ld: fs/nilfs2/inode.o: in function `nilfs_new_inode':
>    inode.c:(.text+0x738): undefined reference to `atomic64_inc_386'
>    /usr/bin/ld: fs/nilfs2/inode.o: in function `nilfs_evict_inode':
>    inode.c:(.text+0x1d8e): undefined reference to `atomic64_dec_386'
>    /usr/bin/ld: fs/nilfs2/the_nilfs.o: in function `nilfs_find_or_create_root':
>    the_nilfs.c:(.text+0x17f0): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: the_nilfs.c:(.text+0x17fb): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/nilfs2/segment.o: in function `nilfs_segctor_do_construct':
>    segment.c:(.text+0x4e4d): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: segment.c:(.text+0x4e61): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/nilfs2/ifile.o: in function `nilfs_ifile_count_free_inodes':
>    ifile.c:(.text+0x380): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: fs/btrfs/ctree.o: in function `__tree_mod_log_insert':
>    ctree.c:(.text+0x2f8): undefined reference to `atomic64_inc_return_386'
>    /usr/bin/ld: fs/btrfs/ctree.o: in function `btrfs_get_tree_mod_seq':
>    ctree.c:(.text+0x3f69): undefined reference to `atomic64_inc_return_386'
>    /usr/bin/ld: fs/btrfs/transaction.o: in function `join_transaction':
>    transaction.c:(.text+0x1109): undefined reference to `atomic64_set_386'
>    /usr/bin/ld: fs/btrfs/xattr.o: in function `btrfs_xattr_handler_set_prop':
>    xattr.c:(.text+0xec): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xattr.c:(.text+0x122): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/btrfs/xattr.o: in function `btrfs_setxattr_trans':
>    xattr.c:(.text+0xc24): undefined reference to `atomic64_read_386'
>    /usr/bin/ld: xattr.c:(.text+0xc5a): undefined reference to `cmpxchg8b_emu'
>    /usr/bin/ld: fs/btrfs/volumes.o: in function `create_chunk':
>    volumes.c:(.text+0x242c): undefined reference to `atomic64_sub_386'
>    /usr/bin/ld: fs/btrfs/volumes.o: in function `btrfs_remove_chunk':
>    volumes.c:(.text+0x51e7): undefined reference to `atomic64_add_386'
>    /usr/bin/ld: fs/btrfs/volumes.o: in function `btrfs_shrink_device':
>    volumes.c:(.text+0x7977): undefined reference to `atomic64_sub_386'
>    /usr/bin/ld: volumes.c:(.text+0x7bcc): undefined reference to `atomic64_add_386'
>    /usr/bin/ld: fs/btrfs/volumes.o: in function `btrfs_init_new_device':
>    volumes.c:(.text+0xdc15): undefined reference to `atomic64_add_386'
>    /usr/bin/ld: volumes.c:(.text+0xeb10): undefined reference to `atomic64_sub_386'
>    /usr/bin/ld: fs/btrfs/volumes.o: in function `read_one_dev':
> 
> ---
> 0-DAY CI Kernel Test Service, Intel Corporation
> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-04-12 13:30 ` [PATCH v3 2/2] " Brian Foster
@ 2021-04-12 22:21   ` kernel test robot
  2021-04-14  0:41     ` Darrick J. Wong
  2021-04-14  1:00   ` Darrick J. Wong
  1 sibling, 1 reply; 26+ messages in thread
From: kernel test robot @ 2021-04-12 22:21 UTC (permalink / raw)
  To: Brian Foster, linux-xfs; +Cc: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 17994 bytes --]

Hi Brian,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on xfs-linux/for-next]
[also build test ERROR on v5.12-rc7 next-20210412]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Brian-Foster/xfs-set-aside-allocation-btree-blocks-from-block-reservation/20210412-213222
base:   https://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git for-next
config: um-randconfig-r022-20210412 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/5ffa1f5fa63a4a9c557f90beb5826866fa4aefd0
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Brian-Foster/xfs-set-aside-allocation-btree-blocks-from-block-reservation/20210412-213222
        git checkout 5ffa1f5fa63a4a9c557f90beb5826866fa4aefd0
        # save the attached .config to linux build tree
        make W=1 ARCH=um 

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   syscall.c:(.text+0xa023): undefined reference to `atomic64_inc_386'
   /usr/bin/ld: kernel/bpf/syscall.o: in function `bpf_link_put':
   syscall.c:(.text+0xa036): undefined reference to `atomic64_dec_return_386'
   /usr/bin/ld: kernel/bpf/syscall.o: in function `bpf_tracing_prog_attach':
   syscall.c:(.text+0xa423): undefined reference to `atomic64_set_386'
   /usr/bin/ld: kernel/bpf/syscall.o: in function `bpf_link_get_from_fd':
   syscall.c:(.text+0xa8a7): undefined reference to `atomic64_inc_386'
   /usr/bin/ld: kernel/bpf/bpf_iter.o: in function `prepare_seq_file':
   bpf_iter.c:(.text+0x1b2): undefined reference to `atomic64_inc_return_386'
   /usr/bin/ld: fs/proc/task_mmu.o: in function `task_mem':
   task_mmu.c:(.text+0x2bfd): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/ext4/balloc.o: in function `ext4_has_free_clusters':
   balloc.c:(.text+0x94): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/ext4/dir.o: in function `ext4_dir_llseek':
   dir.c:(.text+0x2d3): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/ext4/dir.o: in function `ext4_readdir':
   dir.c:(.text+0x84b): undefined reference to `atomic64_read_386'
   /usr/bin/ld: dir.c:(.text+0xc71): undefined reference to `atomic64_read_386'
   /usr/bin/ld: dir.c:(.text+0xc9f): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: dir.c:(.text+0xe1b): undefined reference to `atomic64_read_386'
   /usr/bin/ld: dir.c:(.text+0xe44): undefined reference to `atomic64_read_386'
   /usr/bin/ld: dir.c:(.text+0xe72): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ext4/ialloc.o: in function `get_orlov_stats':
   ialloc.c:(.text+0x205): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/ext4/inline.o: in function `ext4_add_dirent_to_inline.isra.0':
   inline.c:(.text+0x14a9): undefined reference to `atomic64_read_386'
   /usr/bin/ld: inline.c:(.text+0x14df): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ext4/inline.o: in function `ext4_read_inline_dir':
   inline.c:(.text+0x3dd8): undefined reference to `atomic64_read_386'
   /usr/bin/ld: inline.c:(.text+0x3ea7): undefined reference to `atomic64_read_386'
   /usr/bin/ld: inline.c:(.text+0x3ed5): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ext4/inode.o: in function `ext4_do_update_inode':
   inode.c:(.text+0x3f13): undefined reference to `atomic64_read_386'
   /usr/bin/ld: inode.c:(.text+0x44bb): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/ext4/inode.o: in function `__ext4_iget':
   inode.c:(.text+0x7c4d): undefined reference to `atomic64_set_386'
   /usr/bin/ld: inode.c:(.text+0x8385): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/ext4/inode.o: in function `ext4_mark_iloc_dirty':
   inode.c:(.text+0x8a69): undefined reference to `atomic64_read_386'
   /usr/bin/ld: inode.c:(.text+0x8aa0): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ext4/inode.o: in function `ext4_setattr':
   inode.c:(.text+0xf4ed): undefined reference to `atomic64_read_386'
   /usr/bin/ld: inode.c:(.text+0xf523): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ext4/ioctl.o: in function `swap_inode_boot_loader':
   ioctl.c:(.text+0x1794): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/ext4/namei.o: in function `ext4_setent':
   namei.c:(.text+0x2a0a): undefined reference to `atomic64_read_386'
   /usr/bin/ld: namei.c:(.text+0x2a41): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ext4/namei.o: in function `add_dirent_to_buf':
   namei.c:(.text+0x498c): undefined reference to `atomic64_read_386'
   /usr/bin/ld: namei.c:(.text+0x49c3): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ext4/namei.o: in function `ext4_generic_delete_entry':
   namei.c:(.text+0x7452): undefined reference to `atomic64_read_386'
   /usr/bin/ld: namei.c:(.text+0x7488): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ext4/namei.o: in function `ext4_rmdir':
   namei.c:(.text+0x9f2d): undefined reference to `atomic64_read_386'
   /usr/bin/ld: namei.c:(.text+0x9f63): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ext4/resize.o: in function `ext4_update_super.isra.0':
   resize.c:(.text+0x2566): undefined reference to `atomic64_add_386'
   /usr/bin/ld: fs/ext4/xattr.o: in function `ext4_xattr_inode_iget':
   xattr.c:(.text+0x6b6): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/ext4/xattr.o: in function `ext4_xattr_inode_update_ref':
   xattr.c:(.text+0x776): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xattr.c:(.text+0x79f): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/ext4/xattr.o: in function `ext4_xattr_inode_lookup_create':
   xattr.c:(.text+0x2baf): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/fat/dir.o: in function `fat_remove_entries':
   dir.c:(.text+0x2cf0): undefined reference to `atomic64_read_386'
   /usr/bin/ld: dir.c:(.text+0x2d27): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/fat/misc.o: in function `fat_update_time':
   misc.c:(.text+0x721): undefined reference to `atomic64_read_386'
   /usr/bin/ld: misc.c:(.text+0x75e): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ufs/dir.o: in function `ufs_commit_chunk':
   dir.c:(.text+0x24): undefined reference to `atomic64_read_386'
   /usr/bin/ld: dir.c:(.text+0x5b): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ufs/dir.o: in function `ufs_readdir':
   dir.c:(.text+0x62e): undefined reference to `atomic64_read_386'
   /usr/bin/ld: dir.c:(.text+0xa43): undefined reference to `atomic64_read_386'
   /usr/bin/ld: dir.c:(.text+0xa71): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/ufs/inode.o: in function `ufs_iget':
   inode.c:(.text+0x4331): undefined reference to `atomic64_read_386'
   /usr/bin/ld: inode.c:(.text+0x4365): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/affs/dir.o: in function `affs_readdir':
   dir.c:(.text+0xf5): undefined reference to `atomic64_read_386'
   /usr/bin/ld: dir.c:(.text+0x480): undefined reference to `atomic64_read_386'
   /usr/bin/ld: dir.c:(.text+0x4ae): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/affs/amigaffs.o: in function `affs_remove_hash':
   amigaffs.c:(.text+0x180): undefined reference to `atomic64_read_386'
   /usr/bin/ld: amigaffs.c:(.text+0x1b6): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/affs/amigaffs.o: in function `affs_insert_hash':
   amigaffs.c:(.text+0x4ec): undefined reference to `atomic64_read_386'
   /usr/bin/ld: amigaffs.c:(.text+0x522): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/xfs/xfs_trace.o: in function `trace_event_raw_event_xfs_log_assign_tail_lsn':
   xfs_trace.c:(.text+0xe948): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_trace.c:(.text+0xe959): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/xfs_trace.o: in function `trace_event_raw_event_xfs_loggrant_class':
   xfs_trace.c:(.text+0x1126c): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_trace.c:(.text+0x11286): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_trace.c:(.text+0x112b2): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/libxfs/xfs_alloc.o: in function `xfs_alloc_read_agf':
>> xfs_alloc.c:(.text+0x63db): undefined reference to `atomic64_add_386'
   /usr/bin/ld: fs/xfs/libxfs/xfs_alloc_btree.o: in function `xfs_allocbt_free_block':
>> xfs_alloc_btree.c:(.text+0x844): undefined reference to `atomic64_dec_386'
   /usr/bin/ld: fs/xfs/libxfs/xfs_alloc_btree.o: in function `xfs_allocbt_alloc_block':
>> xfs_alloc_btree.c:(.text+0x8d5): undefined reference to `atomic64_inc_386'
   /usr/bin/ld: fs/xfs/libxfs/xfs_inode_buf.o: in function `xfs_inode_to_disk':
   xfs_inode_buf.c:(.text+0x5e7): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/libxfs/xfs_inode_buf.o: in function `xfs_inode_from_disk':
   xfs_inode_buf.c:(.text+0x1409): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/xfs/libxfs/xfs_trans_inode.o: in function `xfs_trans_log_inode':
   xfs_trans_inode.c:(.text+0x490): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_trans_inode.c:(.text+0x4d8): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/xfs/xfs_icache.o: in function `xfs_iget_cache_hit':
   xfs_icache.c:(.text+0x1d0c): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_icache.c:(.text+0x1d78): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/xfs/xfs_iops.o: in function `xfs_vn_update_time':
   xfs_iops.c:(.text+0xab2): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_iops.c:(.text+0xaf8): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/xfs/xfs_inode.o: in function `xfs_init_new_inode':
   xfs_inode.c:(.text+0x27d): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/xfs/xfs_mount.o: in function `xfs_mod_fdblocks':
>> xfs_mount.c:(.text+0x24a2): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/xfs_sysfs.o: in function `write_grant_head_show':
   xfs_sysfs.c:(.text+0x160): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/xfs_sysfs.o: in function `reserve_grant_head_show':
   xfs_sysfs.c:(.text+0x1a6): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/xfs_sysfs.o: in function `log_tail_lsn_show':
   xfs_sysfs.c:(.text+0x1ec): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_space_left':
   xfs_log.c:(.text+0x2cf): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/xfs_log.o:xfs_log.c:(.text+0x2e1): more undefined references to `atomic64_read_386' follow
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_grant_head_init':
   xfs_log.c:(.text+0x4e5): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_alloc_log':
   xfs_log.c:(.text+0xceb): undefined reference to `atomic64_set_386'
   /usr/bin/ld: xfs_log.c:(.text+0xcf6): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_assign_tail_lsn_locked':
   xfs_log.c:(.text+0x1951): undefined reference to `atomic64_set_386'
   /usr/bin/ld: xfs_log.c:(.text+0x196d): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_grant_push_threshold':
   xfs_log.c:(.text+0x1cc5): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x1ceb): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xfs_log_regrant':
   xfs_log.c:(.text+0x258d): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x25b2): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_state_do_callback':
   xfs_log.c:(.text+0x27ab): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x2820): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xlog_sync':
   xfs_log.c:(.text+0x310e): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x3134): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: xfs_log.c:(.text+0x3243): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x3269): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xfs_log_ticket_regrant':
   xfs_log.c:(.text+0x5483): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x54be): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: xfs_log.c:(.text+0x54e7): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x5522): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: xfs_log.c:(.text+0x557f): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x559c): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xfs_log_ticket_ungrant':
   xfs_log.c:(.text+0x5798): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x57d6): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: xfs_log.c:(.text+0x57f9): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x5834): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/xfs/xfs_log.o: in function `xfs_log_reserve':
   xfs_log.c:(.text+0x5d14): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x5d39): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: xfs_log.c:(.text+0x5da8): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log.c:(.text+0x5dcd): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/xfs/xfs_inode_item.o: in function `xfs_inode_item_format':
   xfs_inode_item.c:(.text+0x1da9): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/xfs/xfs_log_recover.o: in function `xlog_set_state':
   xfs_log_recover.c:(.text+0x1e3): undefined reference to `atomic64_set_386'
   /usr/bin/ld: xfs_log_recover.c:(.text+0x1fd): undefined reference to `atomic64_set_386'
   /usr/bin/ld: xfs_log_recover.c:(.text+0x21c): undefined reference to `atomic64_set_386'
   /usr/bin/ld: xfs_log_recover.c:(.text+0x23b): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/xfs/xfs_log_recover.o: in function `xlog_check_unmount_rec':
   xfs_log_recover.c:(.text+0x460e): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/xfs/xfs_log_recover.o:xfs_log_recover.c:(.text+0x461f): more undefined references to `atomic64_set_386' follow
   /usr/bin/ld: fs/xfs/xfs_log_recover.o: in function `xlog_find_tail':
   xfs_log_recover.c:(.text+0x57be): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xfs_log_recover.c:(.text+0x597a): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/nilfs2/inode.o: in function `nilfs_inode_add_blocks':
   inode.c:(.text+0x5c4): undefined reference to `atomic64_add_386'
   /usr/bin/ld: fs/nilfs2/inode.o: in function `nilfs_inode_sub_blocks':
   inode.c:(.text+0x60f): undefined reference to `atomic64_sub_386'
   /usr/bin/ld: fs/nilfs2/inode.o: in function `nilfs_new_inode':
   inode.c:(.text+0x738): undefined reference to `atomic64_inc_386'
   /usr/bin/ld: fs/nilfs2/inode.o: in function `nilfs_evict_inode':
   inode.c:(.text+0x1d8e): undefined reference to `atomic64_dec_386'
   /usr/bin/ld: fs/nilfs2/the_nilfs.o: in function `nilfs_find_or_create_root':
   the_nilfs.c:(.text+0x17f0): undefined reference to `atomic64_set_386'
   /usr/bin/ld: the_nilfs.c:(.text+0x17fb): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/nilfs2/segment.o: in function `nilfs_segctor_do_construct':
   segment.c:(.text+0x4e4d): undefined reference to `atomic64_read_386'
   /usr/bin/ld: segment.c:(.text+0x4e61): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/nilfs2/ifile.o: in function `nilfs_ifile_count_free_inodes':
   ifile.c:(.text+0x380): undefined reference to `atomic64_read_386'
   /usr/bin/ld: fs/btrfs/ctree.o: in function `__tree_mod_log_insert':
   ctree.c:(.text+0x2f8): undefined reference to `atomic64_inc_return_386'
   /usr/bin/ld: fs/btrfs/ctree.o: in function `btrfs_get_tree_mod_seq':
   ctree.c:(.text+0x3f69): undefined reference to `atomic64_inc_return_386'
   /usr/bin/ld: fs/btrfs/transaction.o: in function `join_transaction':
   transaction.c:(.text+0x1109): undefined reference to `atomic64_set_386'
   /usr/bin/ld: fs/btrfs/xattr.o: in function `btrfs_xattr_handler_set_prop':
   xattr.c:(.text+0xec): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xattr.c:(.text+0x122): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/btrfs/xattr.o: in function `btrfs_setxattr_trans':
   xattr.c:(.text+0xc24): undefined reference to `atomic64_read_386'
   /usr/bin/ld: xattr.c:(.text+0xc5a): undefined reference to `cmpxchg8b_emu'
   /usr/bin/ld: fs/btrfs/volumes.o: in function `create_chunk':
   volumes.c:(.text+0x242c): undefined reference to `atomic64_sub_386'
   /usr/bin/ld: fs/btrfs/volumes.o: in function `btrfs_remove_chunk':
   volumes.c:(.text+0x51e7): undefined reference to `atomic64_add_386'
   /usr/bin/ld: fs/btrfs/volumes.o: in function `btrfs_shrink_device':
   volumes.c:(.text+0x7977): undefined reference to `atomic64_sub_386'
   /usr/bin/ld: volumes.c:(.text+0x7bcc): undefined reference to `atomic64_add_386'
   /usr/bin/ld: fs/btrfs/volumes.o: in function `btrfs_init_new_device':
   volumes.c:(.text+0xdc15): undefined reference to `atomic64_add_386'
   /usr/bin/ld: volumes.c:(.text+0xeb10): undefined reference to `atomic64_sub_386'
   /usr/bin/ld: fs/btrfs/volumes.o: in function `read_one_dev':

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 10991 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation
  2021-04-12 13:30 [PATCH v3 REPOST " Brian Foster
@ 2021-04-12 13:30 ` Brian Foster
  2021-04-12 22:21   ` kernel test robot
  2021-04-14  1:00   ` Darrick J. Wong
  0 siblings, 2 replies; 26+ messages in thread
From: Brian Foster @ 2021-04-12 13:30 UTC (permalink / raw)
  To: linux-xfs

The blocks used for allocation btrees (bnobt and countbt) are
technically considered free space. This is because as free space is
used, allocbt blocks are removed and naturally become available for
traditional allocation. However, this means that a significant
portion of free space may consist of in-use btree blocks if free
space is severely fragmented.

On large filesystems with large perag reservations, this can lead to
a rare but nasty condition where a significant amount of physical
free space is available, but the majority of actual usable blocks
consist of in-use allocbt blocks. We have a record of a (~12TB, 32
AG) filesystem with multiple AGs in a state with ~2.5GB or so of
free blocks tracked across ~300 total allocbt blocks, but
effectively at 100% full because the free space is entirely consumed
by refcountbt perag reservation.

Such a large perag reservation is by design on large filesystems.
The problem is that because the free space is so fragmented, this AG
contributes the 300 or so allocbt blocks to the global counters as
free space. If this pattern repeats across enough AGs, the
filesystem lands in a state where global block reservation can
outrun physical block availability. For example, a streaming
buffered write on the affected filesystem continues to allow delayed
allocation beyond the point where writeback starts to fail due to
physical block allocation failures. The expected behavior is for the
delalloc block reservation to fail gracefully with -ENOSPC before
physical block allocation failure is a possibility.

To address this problem, introduce an in-core counter to track the
sum of all allocbt blocks in use by the filesystem. Use the new
counter to set these blocks aside at reservation time and thus
ensure they cannot be reserved until truly available. Since this is
only necessary when perag reservations are active and the counter
requires a read of each AGF to fully populate, only enforce the
set-aside on filesystems with perag reservation enabled. This allows
initialization of the counter at ->pagf_init time because the perag
reservation init code reads each AGF at mount time.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_alloc.c       | 12 ++++++++++++
 fs/xfs/libxfs/xfs_alloc_btree.c |  2 ++
 fs/xfs/xfs_mount.c              | 18 +++++++++++++++++-
 fs/xfs/xfs_mount.h              |  6 ++++++
 4 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index aaa19101bb2a..144e2d68245c 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -3036,6 +3036,7 @@ xfs_alloc_read_agf(
 	struct xfs_agf		*agf;		/* ag freelist header */
 	struct xfs_perag	*pag;		/* per allocation group data */
 	int			error;
+	uint32_t		allocbt_blks;
 
 	trace_xfs_alloc_read_agf(mp, agno);
 
@@ -3066,6 +3067,17 @@ xfs_alloc_read_agf(
 		pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
 		pag->pagf_init = 1;
 		pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
+
+		/*
+		 * Update the global in-core allocbt block counter. Filter
+		 * rmapbt blocks from the on-disk counter because those are
+		 * managed by perag reservation.
+		 */
+		if (pag->pagf_btreeblks > be32_to_cpu(agf->agf_rmap_blocks)) {
+			allocbt_blks = pag->pagf_btreeblks -
+					be32_to_cpu(agf->agf_rmap_blocks);
+			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);
+		}
 	}
 #ifdef DEBUG
 	else if (!XFS_FORCED_SHUTDOWN(mp)) {
diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
index 8e01231b308e..9f5a45f7baed 100644
--- a/fs/xfs/libxfs/xfs_alloc_btree.c
+++ b/fs/xfs/libxfs/xfs_alloc_btree.c
@@ -71,6 +71,7 @@ xfs_allocbt_alloc_block(
 		return 0;
 	}
 
+	atomic64_inc(&cur->bc_mp->m_allocbt_blks);
 	xfs_extent_busy_reuse(cur->bc_mp, cur->bc_ag.agno, bno, 1, false);
 
 	xfs_trans_agbtree_delta(cur->bc_tp, 1);
@@ -95,6 +96,7 @@ xfs_allocbt_free_block(
 	if (error)
 		return error;
 
+	atomic64_dec(&cur->bc_mp->m_allocbt_blks);
 	xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1,
 			      XFS_EXTENT_BUSY_SKIP_DISCARD);
 	xfs_trans_agbtree_delta(cur->bc_tp, -1);
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index cb1e2c4702c3..1f835c375a89 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -1188,6 +1188,7 @@ xfs_mod_fdblocks(
 	int64_t			lcounter;
 	long long		res_used;
 	s32			batch;
+	uint64_t		set_aside;
 
 	if (delta > 0) {
 		/*
@@ -1227,8 +1228,23 @@ xfs_mod_fdblocks(
 	else
 		batch = XFS_FDBLOCKS_BATCH;
 
+	/*
+	 * Set aside allocbt blocks because these blocks are tracked as free
+	 * space but not available for allocation. Technically this means that a
+	 * single reservation cannot consume all remaining free space, but the
+	 * ratio of allocbt blocks to usable free blocks should be rather small.
+	 * The tradeoff without this is that filesystems that maintain high
+	 * perag block reservations can over reserve physical block availability
+	 * and fail physical allocation, which leads to much more serious
+	 * problems (i.e. transaction abort, pagecache discards, etc.) than
+	 * slightly premature -ENOSPC.
+	 */
+	set_aside = mp->m_alloc_set_aside;
+	if (mp->m_has_agresv)
+		set_aside += atomic64_read(&mp->m_allocbt_blks);
+
 	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
-	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
+	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
 				     XFS_FDBLOCKS_BATCH) >= 0) {
 		/* we had space! */
 		return 0;
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index 8847ffd29777..80b9f37f65e6 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -171,6 +171,12 @@ typedef struct xfs_mount {
 	 * extents or anything related to the rt device.
 	 */
 	struct percpu_counter	m_delalloc_blks;
+	/*
+	 * Global count of allocation btree blocks in use across all AGs. Only
+	 * used when perag reservation is enabled. Helps prevent block
+	 * reservation from attempting to reserve allocation btree blocks.
+	 */
+	atomic64_t		m_allocbt_blks;
 
 	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
 	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2021-04-22 12:42 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-18 16:17 [PATCH v3 0/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
2021-03-18 16:17 ` [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active Brian Foster
2021-03-18 20:55   ` Dave Chinner
2021-03-18 22:19     ` Darrick J. Wong
2021-03-19  1:05       ` Dave Chinner
2021-03-19  1:34         ` Darrick J. Wong
2021-03-19 14:54           ` Brian Foster
2021-03-19  1:43         ` Dave Chinner
2021-03-19  1:48           ` Darrick J. Wong
2021-03-19  2:08             ` Dave Chinner
2021-03-19 14:54           ` Brian Foster
2021-03-23 22:40             ` Dave Chinner
2021-03-24 14:24               ` Brian Foster
2021-03-18 16:17 ` [PATCH v3 2/2] xfs: set aside allocation btree blocks from block reservation Brian Foster
2021-03-18 20:31   ` Darrick J. Wong
2021-03-19 15:00     ` Brian Foster
2021-03-27  1:34       ` Darrick J. Wong
2021-03-27 14:51         ` Brian Foster
2021-04-09 14:17 ` [PATCH v3 0/2] " Brian Foster
2021-04-12 13:30 [PATCH v3 REPOST " Brian Foster
2021-04-12 13:30 ` [PATCH v3 2/2] " Brian Foster
2021-04-12 22:21   ` kernel test robot
2021-04-14  0:41     ` Darrick J. Wong
2021-04-14  1:00   ` Darrick J. Wong
2021-04-20 16:24     ` Brian Foster
2021-04-21 23:40       ` Darrick J. Wong
2021-04-22 12:41         ` Brian Foster

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).