linux-btrfs.vger.kernel.org archive mirror
* [PATCH v3 0/9] btrfs: block group cleanups
@ 2022-08-05 14:14 Josef Bacik
  2022-08-05 14:14 ` [PATCH v3 1/9] btrfs: use btrfs_fs_closing for background bg work Josef Bacik
                   ` (9 more replies)
  0 siblings, 10 replies; 12+ messages in thread
From: Josef Bacik @ 2022-08-05 14:14 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

v2->v3:
- Removed an incorrect check for FS_OPEN from the first patch.

v1->v2:
- I'm an idiot and didn't rebase properly, so I'm adding the two other cleanups
  I had but didn't send.
- Rebased onto a recent misc-next and fixed the compile errors.
- Realized that the new zoned patches that caused the compile errors also meant
  btrfs_update_space_info needed to be cleaned up, so added patches for that.

--- Original email ---

I'm reworking our relocation and unused block group deletion workqueues, which
requires some cleanups of how we deal with flags on the block group.  We've had
a bit field for various flags on the block group for a while, but there's a
subtle gotcha with this bitfield: every modification has to be protected by
bg->lock, otherwise concurrent updates can clobber adjacent values, and there
were a few places where we weren't holding the lock.

Rework these to be normal flags, and then follow up this conversion and clean
up some of the usage of the different flags.  Additionally there's a cleanup
around when to break out of the background workers.  Thanks,

Josef
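The gotcha above can be illustrated in plain userspace C.  This is a sketch, not kernel code: the struct names and helpers below are invented for the illustration, with C11 atomics standing in for the kernel's set_bit()/test_bit() family.  Adjacent one-bit bitfield members share a word, so every write is a read-modify-write of the whole word and needs a lock; a flags word updated with atomic per-bit operations does not.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Analogue of the old layout: adjacent one-bit fields share one word,
 * so any store is a read-modify-write of the whole word and every
 * modification must be serialized by a lock (bg->lock in btrfs). */
struct bg_bitfield {
	unsigned int removed:1;
	unsigned int to_copy:1;
};

/* Analogue of the new layout: a single flags word updated with atomic
 * per-bit operations, so no lock is needed just to flip a flag. */
struct bg_flags {
	atomic_ulong runtime_flags;
};

enum { FLAG_REMOVED, FLAG_TO_COPY };

static void set_flag(struct bg_flags *bg, int flag)
{
	/* Atomic OR touches only this bit; concurrent updates to other
	 * bits in the same word cannot be lost. */
	atomic_fetch_or(&bg->runtime_flags, 1UL << flag);
}

static bool test_flag(struct bg_flags *bg, int flag)
{
	return atomic_load(&bg->runtime_flags) & (1UL << flag);
}
```

With the bitfield layout, two CPUs setting `removed` and `to_copy` concurrently without the lock can each read the same old word and one update silently vanishes; with the atomic flags word that lost-update window is gone.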

Josef Bacik (9):
  btrfs: use btrfs_fs_closing for background bg work
  btrfs: simplify btrfs_update_space_info
  btrfs: handle space_info setting of bg in btrfs_add_bg_to_space_info
  btrfs: convert block group bit field to use bit helpers
  btrfs: remove block_group->lock protection for TO_COPY
  btrfs: simplify btrfs_put_block_group_cache
  btrfs: remove BLOCK_GROUP_FLAG_HAS_CACHING_CTL
  btrfs: remove bg->lock protection for relocation repair flag
  btrfs: delete btrfs_wait_space_cache_v1_finished

 fs/btrfs/block-group.c      | 158 ++++++++++++++----------------------
 fs/btrfs/block-group.h      |  21 ++---
 fs/btrfs/dev-replace.c      |  11 +--
 fs/btrfs/extent-tree.c      |   7 +-
 fs/btrfs/free-space-cache.c |  18 ++--
 fs/btrfs/scrub.c            |  16 ++--
 fs/btrfs/space-info.c       |  38 +++++----
 fs/btrfs/space-info.h       |   6 +-
 fs/btrfs/volumes.c          |  16 ++--
 fs/btrfs/zoned.c            |  34 +++++---
 10 files changed, 148 insertions(+), 177 deletions(-)

-- 
2.26.3



* [PATCH v3 1/9] btrfs: use btrfs_fs_closing for background bg work
  2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
@ 2022-08-05 14:14 ` Josef Bacik
  2022-08-05 14:14 ` [PATCH v3 2/9] btrfs: simplify btrfs_update_space_info Josef Bacik
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Josef Bacik @ 2022-08-05 14:14 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

For both unused block group deletion and the background reclaim work,
we'll happily run even if the fs is closing.  However I want to move
these to their own worker thread, and they can be long running jobs, so
add a check to see if we're closing and simply bail.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/block-group.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 993aca2f1e18..fd3bf13d5b40 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1321,6 +1321,9 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
 	if (!test_bit(BTRFS_FS_OPEN, &fs_info->flags))
 		return;
 
+	if (btrfs_fs_closing(fs_info))
+		return;
+
 	/*
 	 * Long running balances can keep us blocked here for eternity, so
 	 * simply skip deletion if we're unable to get the mutex.
@@ -1560,6 +1563,9 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
 	if (!test_bit(BTRFS_FS_OPEN, &fs_info->flags))
 		return;
 
+	if (btrfs_fs_closing(fs_info))
+		return;
+
 	if (!btrfs_should_reclaim(fs_info))
 		return;
 
-- 
2.26.3



* [PATCH v3 2/9] btrfs: simplify btrfs_update_space_info
  2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
  2022-08-05 14:14 ` [PATCH v3 1/9] btrfs: use btrfs_fs_closing for background bg work Josef Bacik
@ 2022-08-05 14:14 ` Josef Bacik
  2022-08-05 14:14 ` [PATCH v3 3/9] btrfs: handle space_info setting of bg in btrfs_add_bg_to_space_info Josef Bacik
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Josef Bacik @ 2022-08-05 14:14 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

This function has grown a bunch of new arguments, and it just boils down
to passing in all the block group fields as arguments.  Simplify this by
passing in the block group itself and updating the space_info fields
based on the block group fields directly.
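The shape of the change can be sketched in plain C.  The struct and field names below are illustrative stand-ins, not the kernel's definitions: instead of passing length, used bytes, and the active flag as separate arguments, the callee takes the block group and reads the fields itself.

```c
#include <stdbool.h>

/* Illustrative stand-ins for the kernel structures. */
struct block_group {
	unsigned long long length;
	unsigned long long used;
	bool zone_is_active;
};

struct space_info {
	unsigned long long total_bytes;
	unsigned long long bytes_used;
	unsigned long long active_total_bytes;
};

/* Before the cleanup each of these values crossed the call boundary as
 * its own argument; after it, the block group is passed and the fields
 * are read at the point of use. */
static void add_bg_to_space_info(struct space_info *si,
				 const struct block_group *bg)
{
	si->total_bytes += bg->length;
	si->bytes_used += bg->used;
	if (bg->zone_is_active)
		si->active_total_bytes += bg->length;
}
```

Besides shortening call sites, this makes it impossible for a caller to pass the fields in the wrong order or forget one when the signature grows.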

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/block-group.c | 28 +++++++++++-----------------
 fs/btrfs/space-info.c  | 29 ++++++++++++++---------------
 fs/btrfs/space-info.h  |  7 +++----
 3 files changed, 28 insertions(+), 36 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index fd3bf13d5b40..9790f01de93e 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -2118,10 +2118,7 @@ static int read_one_block_group(struct btrfs_fs_info *info,
 		goto error;
 	}
 	trace_btrfs_add_block_group(info, cache, 0);
-	btrfs_update_space_info(info, cache->flags, cache->length,
-				cache->used, cache->bytes_super,
-				cache->zone_unusable, cache->zone_is_active,
-				&space_info);
+	btrfs_add_bg_to_space_info(info, cache, &space_info);
 
 	cache->space_info = space_info;
 
@@ -2190,8 +2187,7 @@ static int fill_dummy_bgs(struct btrfs_fs_info *fs_info)
 			break;
 		}
 
-		btrfs_update_space_info(fs_info, bg->flags, em->len, em->len,
-					0, 0, false, &space_info);
+		btrfs_add_bg_to_space_info(fs_info, bg, &space_info);
 		bg->space_info = space_info;
 		link_block_group(bg);
 
@@ -2542,14 +2538,6 @@ struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *tran
 
 	btrfs_free_excluded_extents(cache);
 
-#ifdef CONFIG_BTRFS_DEBUG
-	if (btrfs_should_fragment_free_space(cache)) {
-		u64 new_bytes_used = size - bytes_used;
-
-		bytes_used += new_bytes_used >> 1;
-		fragment_free_space(cache);
-	}
-#endif
 	/*
 	 * Ensure the corresponding space_info object is created and
 	 * assigned to our block group. We want our bg to be added to the rbtree
@@ -2570,11 +2558,17 @@ struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *tran
 	 * the rbtree, update the space info's counters.
 	 */
 	trace_btrfs_add_block_group(fs_info, cache, 1);
-	btrfs_update_space_info(fs_info, cache->flags, size, bytes_used,
-				cache->bytes_super, cache->zone_unusable,
-				cache->zone_is_active, &cache->space_info);
+	btrfs_add_bg_to_space_info(fs_info, cache, &cache->space_info);
 	btrfs_update_global_block_rsv(fs_info);
 
+#ifdef CONFIG_BTRFS_DEBUG
+	if (btrfs_should_fragment_free_space(cache)) {
+		u64 new_bytes_used = size - bytes_used;
+
+		cache->space_info->bytes_used += new_bytes_used >> 1;
+		fragment_free_space(cache);
+	}
+#endif
 	link_block_group(cache);
 
 	list_add_tail(&cache->bg_list, &trans->new_bgs);
diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
index d0cbeb7ae81c..a9433d19d827 100644
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -293,28 +293,27 @@ int btrfs_init_space_info(struct btrfs_fs_info *fs_info)
 	return ret;
 }
 
-void btrfs_update_space_info(struct btrfs_fs_info *info, u64 flags,
-			     u64 total_bytes, u64 bytes_used,
-			     u64 bytes_readonly, u64 bytes_zone_unusable,
-			     bool active, struct btrfs_space_info **space_info)
+void btrfs_add_bg_to_space_info(struct btrfs_fs_info *info,
+				struct btrfs_block_group *block_group,
+				struct btrfs_space_info **space_info)
 {
 	struct btrfs_space_info *found;
 	int factor;
 
-	factor = btrfs_bg_type_to_factor(flags);
+	factor = btrfs_bg_type_to_factor(block_group->flags);
 
-	found = btrfs_find_space_info(info, flags);
+	found = btrfs_find_space_info(info, block_group->flags);
 	ASSERT(found);
 	spin_lock(&found->lock);
-	found->total_bytes += total_bytes;
-	if (active)
-		found->active_total_bytes += total_bytes;
-	found->disk_total += total_bytes * factor;
-	found->bytes_used += bytes_used;
-	found->disk_used += bytes_used * factor;
-	found->bytes_readonly += bytes_readonly;
-	found->bytes_zone_unusable += bytes_zone_unusable;
-	if (total_bytes > 0)
+	found->total_bytes += block_group->length;
+	if (block_group->zone_is_active)
+		found->active_total_bytes += block_group->length;
+	found->disk_total += block_group->length * factor;
+	found->bytes_used += block_group->used;
+	found->disk_used += block_group->used * factor;
+	found->bytes_readonly += block_group->bytes_super;
+	found->bytes_zone_unusable += block_group->zone_unusable;
+	if (block_group->length > 0)
 		found->full = 0;
 	btrfs_try_granting_tickets(info, found);
 	spin_unlock(&found->lock);
diff --git a/fs/btrfs/space-info.h b/fs/btrfs/space-info.h
index 12fd6147f92d..101e83828ee5 100644
--- a/fs/btrfs/space-info.h
+++ b/fs/btrfs/space-info.h
@@ -123,10 +123,9 @@ DECLARE_SPACE_INFO_UPDATE(bytes_may_use, "space_info");
 DECLARE_SPACE_INFO_UPDATE(bytes_pinned, "pinned");
 
 int btrfs_init_space_info(struct btrfs_fs_info *fs_info);
-void btrfs_update_space_info(struct btrfs_fs_info *info, u64 flags,
-			     u64 total_bytes, u64 bytes_used,
-			     u64 bytes_readonly, u64 bytes_zone_unusable,
-			     bool active, struct btrfs_space_info **space_info);
+void btrfs_add_bg_to_space_info(struct btrfs_fs_info *info,
+				struct btrfs_block_group *block_group,
+				struct btrfs_space_info **space_info);
 void btrfs_update_space_info_chunk_size(struct btrfs_space_info *space_info,
 					u64 chunk_size);
 struct btrfs_space_info *btrfs_find_space_info(struct btrfs_fs_info *info,
-- 
2.26.3



* [PATCH v3 3/9] btrfs: handle space_info setting of bg in btrfs_add_bg_to_space_info
  2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
  2022-08-05 14:14 ` [PATCH v3 1/9] btrfs: use btrfs_fs_closing for background bg work Josef Bacik
  2022-08-05 14:14 ` [PATCH v3 2/9] btrfs: simplify btrfs_update_space_info Josef Bacik
@ 2022-08-05 14:14 ` Josef Bacik
  2022-08-05 14:14 ` [PATCH v3 4/9] btrfs: convert block group bit field to use bit helpers Josef Bacik
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Josef Bacik @ 2022-08-05 14:14 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

We previously had the pattern of

btrfs_update_space_info(all, the, bg, fields, &space_info);
link_block_group(bg);
bg->space_info = space_info;

Now that we're passing the bg into btrfs_add_bg_to_space_info, we can do
the linking and the bg->space_info assignment inside that function,
transforming this to simply

btrfs_add_bg_to_space_info(fs_info, bg);

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/block-group.c | 25 +++----------------------
 fs/btrfs/space-info.c  | 13 +++++++++----
 fs/btrfs/space-info.h  |  3 +--
 3 files changed, 13 insertions(+), 28 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 9790f01de93e..5f062c5d3b6f 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1913,16 +1913,6 @@ static int exclude_super_stripes(struct btrfs_block_group *cache)
 	return 0;
 }
 
-static void link_block_group(struct btrfs_block_group *cache)
-{
-	struct btrfs_space_info *space_info = cache->space_info;
-	int index = btrfs_bg_flags_to_raid_index(cache->flags);
-
-	down_write(&space_info->groups_sem);
-	list_add_tail(&cache->list, &space_info->block_groups[index]);
-	up_write(&space_info->groups_sem);
-}
-
 static struct btrfs_block_group *btrfs_create_block_group_cache(
 		struct btrfs_fs_info *fs_info, u64 start)
 {
@@ -2025,7 +2015,6 @@ static int read_one_block_group(struct btrfs_fs_info *info,
 				int need_clear)
 {
 	struct btrfs_block_group *cache;
-	struct btrfs_space_info *space_info;
 	const bool mixed = btrfs_fs_incompat(info, MIXED_GROUPS);
 	int ret;
 
@@ -2118,11 +2107,7 @@ static int read_one_block_group(struct btrfs_fs_info *info,
 		goto error;
 	}
 	trace_btrfs_add_block_group(info, cache, 0);
-	btrfs_add_bg_to_space_info(info, cache, &space_info);
-
-	cache->space_info = space_info;
-
-	link_block_group(cache);
+	btrfs_add_bg_to_space_info(info, cache);
 
 	set_avail_alloc_bits(info, cache->flags);
 	if (btrfs_chunk_writeable(info, cache->start)) {
@@ -2146,7 +2131,6 @@ static int read_one_block_group(struct btrfs_fs_info *info,
 static int fill_dummy_bgs(struct btrfs_fs_info *fs_info)
 {
 	struct extent_map_tree *em_tree = &fs_info->mapping_tree;
-	struct btrfs_space_info *space_info;
 	struct rb_node *node;
 	int ret = 0;
 
@@ -2187,9 +2171,7 @@ static int fill_dummy_bgs(struct btrfs_fs_info *fs_info)
 			break;
 		}
 
-		btrfs_add_bg_to_space_info(fs_info, bg, &space_info);
-		bg->space_info = space_info;
-		link_block_group(bg);
+		btrfs_add_bg_to_space_info(fs_info, bg);
 
 		set_avail_alloc_bits(fs_info, bg->flags);
 	}
@@ -2558,7 +2540,7 @@ struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *tran
 	 * the rbtree, update the space info's counters.
 	 */
 	trace_btrfs_add_block_group(fs_info, cache, 1);
-	btrfs_add_bg_to_space_info(fs_info, cache, &cache->space_info);
+	btrfs_add_bg_to_space_info(fs_info, cache);
 	btrfs_update_global_block_rsv(fs_info);
 
 #ifdef CONFIG_BTRFS_DEBUG
@@ -2569,7 +2551,6 @@ struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *tran
 		fragment_free_space(cache);
 	}
 #endif
-	link_block_group(cache);
 
 	list_add_tail(&cache->bg_list, &trans->new_bgs);
 	trans->delayed_ref_updates++;
diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
index a9433d19d827..f89aa49f53d4 100644
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -294,11 +294,10 @@ int btrfs_init_space_info(struct btrfs_fs_info *fs_info)
 }
 
 void btrfs_add_bg_to_space_info(struct btrfs_fs_info *info,
-				struct btrfs_block_group *block_group,
-				struct btrfs_space_info **space_info)
+				struct btrfs_block_group *block_group)
 {
 	struct btrfs_space_info *found;
-	int factor;
+	int factor, index;
 
 	factor = btrfs_bg_type_to_factor(block_group->flags);
 
@@ -317,7 +316,13 @@ void btrfs_add_bg_to_space_info(struct btrfs_fs_info *info,
 		found->full = 0;
 	btrfs_try_granting_tickets(info, found);
 	spin_unlock(&found->lock);
-	*space_info = found;
+
+	block_group->space_info = found;
+
+	index = btrfs_bg_flags_to_raid_index(block_group->flags);
+	down_write(&found->groups_sem);
+	list_add_tail(&block_group->list, &found->block_groups[index]);
+	up_write(&found->groups_sem);
 }
 
 struct btrfs_space_info *btrfs_find_space_info(struct btrfs_fs_info *info,
diff --git a/fs/btrfs/space-info.h b/fs/btrfs/space-info.h
index 101e83828ee5..2039096803ed 100644
--- a/fs/btrfs/space-info.h
+++ b/fs/btrfs/space-info.h
@@ -124,8 +124,7 @@ DECLARE_SPACE_INFO_UPDATE(bytes_pinned, "pinned");
 
 int btrfs_init_space_info(struct btrfs_fs_info *fs_info);
 void btrfs_add_bg_to_space_info(struct btrfs_fs_info *info,
-				struct btrfs_block_group *block_group,
-				struct btrfs_space_info **space_info);
+				struct btrfs_block_group *block_group);
 void btrfs_update_space_info_chunk_size(struct btrfs_space_info *space_info,
 					u64 chunk_size);
 struct btrfs_space_info *btrfs_find_space_info(struct btrfs_fs_info *info,
-- 
2.26.3



* [PATCH v3 4/9] btrfs: convert block group bit field to use bit helpers
  2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
                   ` (2 preceding siblings ...)
  2022-08-05 14:14 ` [PATCH v3 3/9] btrfs: handle space_info setting of bg in btrfs_add_bg_to_space_info Josef Bacik
@ 2022-08-05 14:14 ` Josef Bacik
  2022-08-10  7:42   ` Naohiro Aota
  2022-08-05 14:14 ` [PATCH v3 5/9] btrfs: remove block_group->lock protection for TO_COPY Josef Bacik
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 12+ messages in thread
From: Josef Bacik @ 2022-08-05 14:14 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

We use a bit field in the btrfs_block_group for different flags, however
this is awkward because we have to hold the block_group->lock for any
modification of any of these fields, and it makes the code clunky for a
few of these flags.  Convert these to a proper flags setup so we can
utilize the bit helpers.
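A convenient side effect of the conversion is that lock-protected check-then-set sequences collapse into a single test_and_set_bit() call, as the btrfs_repair_one_zone() hunk below does.  A userspace analogue of those helpers, sketched with C11 atomics (the function names here are invented for the sketch, not the kernel's):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Analogue of the kernel's test_and_set_bit(): atomically set bit nr
 * and report whether it was already set.  Exactly one concurrent
 * caller observes "was clear", so a lock + check + set sequence
 * becomes one call. */
static bool test_and_set_flag(atomic_ulong *flags, int nr)
{
	unsigned long mask = 1UL << nr;

	return atomic_fetch_or(flags, mask) & mask;
}

/* Analogue of test_and_clear_bit(): atomically clear bit nr and
 * report whether it had been set. */
static bool test_and_clear_flag(atomic_ulong *flags, int nr)
{
	unsigned long mask = 1UL << nr;

	return atomic_fetch_and(flags, ~mask) & mask;
}
```

The return value is what makes the pattern work: of several racing threads, only the first gets `false` back from test_and_set_flag() and proceeds (e.g. to spawn the repair kthread); everyone else sees `true` and bails.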

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/block-group.c      | 27 +++++++++++++++++----------
 fs/btrfs/block-group.h      | 20 ++++++++++++--------
 fs/btrfs/dev-replace.c      |  6 +++---
 fs/btrfs/extent-tree.c      |  7 +++++--
 fs/btrfs/free-space-cache.c | 18 +++++++++---------
 fs/btrfs/scrub.c            | 13 +++++++------
 fs/btrfs/space-info.c       |  2 +-
 fs/btrfs/volumes.c          | 11 ++++++-----
 fs/btrfs/zoned.c            | 34 ++++++++++++++++++++++------------
 9 files changed, 82 insertions(+), 56 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 5f062c5d3b6f..8fd54f4dd2de 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -789,7 +789,7 @@ int btrfs_cache_block_group(struct btrfs_block_group *cache, int load_cache_only
 		cache->cached = BTRFS_CACHE_FAST;
 	else
 		cache->cached = BTRFS_CACHE_STARTED;
-	cache->has_caching_ctl = 1;
+	set_bit(BLOCK_GROUP_FLAG_HAS_CACHING_CTL, &cache->runtime_flags);
 	spin_unlock(&cache->lock);
 
 	write_lock(&fs_info->block_group_cache_lock);
@@ -1005,11 +1005,14 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
 		kobject_put(kobj);
 	}
 
-	if (block_group->has_caching_ctl)
+
+	if (test_bit(BLOCK_GROUP_FLAG_HAS_CACHING_CTL,
+		     &block_group->runtime_flags))
 		caching_ctl = btrfs_get_caching_control(block_group);
 	if (block_group->cached == BTRFS_CACHE_STARTED)
 		btrfs_wait_block_group_cache_done(block_group);
-	if (block_group->has_caching_ctl) {
+	if (test_bit(BLOCK_GROUP_FLAG_HAS_CACHING_CTL,
+		     &block_group->runtime_flags)) {
 		write_lock(&fs_info->block_group_cache_lock);
 		if (!caching_ctl) {
 			struct btrfs_caching_control *ctl;
@@ -1051,12 +1054,13 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
 			< block_group->zone_unusable);
 		WARN_ON(block_group->space_info->disk_total
 			< block_group->length * factor);
-		WARN_ON(block_group->zone_is_active &&
+		WARN_ON(test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+				 &block_group->runtime_flags) &&
 			block_group->space_info->active_total_bytes
 			< block_group->length);
 	}
 	block_group->space_info->total_bytes -= block_group->length;
-	if (block_group->zone_is_active)
+	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags))
 		block_group->space_info->active_total_bytes -= block_group->length;
 	block_group->space_info->bytes_readonly -=
 		(block_group->length - block_group->zone_unusable);
@@ -1086,7 +1090,8 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
 		goto out;
 
 	spin_lock(&block_group->lock);
-	block_group->removed = 1;
+	set_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags);
+
 	/*
 	 * At this point trimming or scrub can't start on this block group,
 	 * because we removed the block group from the rbtree
@@ -2426,7 +2431,8 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
 		ret = insert_block_group_item(trans, block_group);
 		if (ret)
 			btrfs_abort_transaction(trans, ret);
-		if (!block_group->chunk_item_inserted) {
+		if (!test_bit(BLOCK_GROUP_FLAG_CHUNK_ITEM_INSERTED,
+			      &block_group->runtime_flags)) {
 			mutex_lock(&fs_info->chunk_mutex);
 			ret = btrfs_chunk_alloc_add_chunk_item(trans, block_group);
 			mutex_unlock(&fs_info->chunk_mutex);
@@ -3972,7 +3978,8 @@ void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
 		while (block_group) {
 			btrfs_wait_block_group_cache_done(block_group);
 			spin_lock(&block_group->lock);
-			if (block_group->iref)
+			if (test_bit(BLOCK_GROUP_FLAG_IREF,
+				     &block_group->runtime_flags))
 				break;
 			spin_unlock(&block_group->lock);
 			block_group = btrfs_next_block_group(block_group);
@@ -3985,7 +3992,7 @@ void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
 		}
 
 		inode = block_group->inode;
-		block_group->iref = 0;
+		clear_bit(BLOCK_GROUP_FLAG_IREF, &block_group->runtime_flags);
 		block_group->inode = NULL;
 		spin_unlock(&block_group->lock);
 		ASSERT(block_group->io_ctl.inode == NULL);
@@ -4127,7 +4134,7 @@ void btrfs_unfreeze_block_group(struct btrfs_block_group *block_group)
 
 	spin_lock(&block_group->lock);
 	cleanup = (atomic_dec_and_test(&block_group->frozen) &&
-		   block_group->removed);
+		   test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags));
 	spin_unlock(&block_group->lock);
 
 	if (cleanup) {
diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index 35e0e860cc0b..8008a391ed8c 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -46,6 +46,17 @@ enum btrfs_chunk_alloc_enum {
 	CHUNK_ALLOC_FORCE_FOR_EXTENT,
 };
 
+enum btrfs_block_group_flags {
+	BLOCK_GROUP_FLAG_IREF,
+	BLOCK_GROUP_FLAG_HAS_CACHING_CTL,
+	BLOCK_GROUP_FLAG_REMOVED,
+	BLOCK_GROUP_FLAG_TO_COPY,
+	BLOCK_GROUP_FLAG_RELOCATING_REPAIR,
+	BLOCK_GROUP_FLAG_CHUNK_ITEM_INSERTED,
+	BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+	BLOCK_GROUP_FLAG_ZONED_DATA_RELOC,
+};
+
 struct btrfs_caching_control {
 	struct list_head list;
 	struct mutex mutex;
@@ -95,16 +106,9 @@ struct btrfs_block_group {
 
 	/* For raid56, this is a full stripe, without parity */
 	unsigned long full_stripe_len;
+	unsigned long runtime_flags;
 
 	unsigned int ro;
-	unsigned int iref:1;
-	unsigned int has_caching_ctl:1;
-	unsigned int removed:1;
-	unsigned int to_copy:1;
-	unsigned int relocating_repair:1;
-	unsigned int chunk_item_inserted:1;
-	unsigned int zone_is_active:1;
-	unsigned int zoned_data_reloc_ongoing:1;
 
 	int disk_cache_state;
 
diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
index f43196a893ca..f85bbd99230b 100644
--- a/fs/btrfs/dev-replace.c
+++ b/fs/btrfs/dev-replace.c
@@ -546,7 +546,7 @@ static int mark_block_group_to_copy(struct btrfs_fs_info *fs_info,
 			continue;
 
 		spin_lock(&cache->lock);
-		cache->to_copy = 1;
+		set_bit(BLOCK_GROUP_FLAG_TO_COPY, &cache->runtime_flags);
 		spin_unlock(&cache->lock);
 
 		btrfs_put_block_group(cache);
@@ -577,7 +577,7 @@ bool btrfs_finish_block_group_to_copy(struct btrfs_device *srcdev,
 		return true;
 
 	spin_lock(&cache->lock);
-	if (cache->removed) {
+	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &cache->runtime_flags)) {
 		spin_unlock(&cache->lock);
 		return true;
 	}
@@ -611,7 +611,7 @@ bool btrfs_finish_block_group_to_copy(struct btrfs_device *srcdev,
 
 	/* Last stripe on this device */
 	spin_lock(&cache->lock);
-	cache->to_copy = 0;
+	clear_bit(BLOCK_GROUP_FLAG_TO_COPY, &cache->runtime_flags);
 	spin_unlock(&cache->lock);
 
 	return true;
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index ea3ec1e761e8..fbf10cd0155e 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3816,7 +3816,9 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 	       block_group->start == fs_info->data_reloc_bg ||
 	       fs_info->data_reloc_bg == 0);
 
-	if (block_group->ro || block_group->zoned_data_reloc_ongoing) {
+	if (block_group->ro ||
+	    test_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC,
+		     &block_group->runtime_flags)) {
 		ret = 1;
 		goto out;
 	}
@@ -3893,7 +3895,8 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 		 * regular extents) at the same time to the same zone, which
 		 * easily break the write pointer.
 		 */
-		block_group->zoned_data_reloc_ongoing = 1;
+		set_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC,
+			&block_group->runtime_flags);
 		fs_info->data_reloc_bg = 0;
 	}
 	spin_unlock(&fs_info->relocation_bg_lock);
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index 996da650ecdc..fd73327134ac 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -126,10 +126,9 @@ struct inode *lookup_free_space_inode(struct btrfs_block_group *block_group,
 		block_group->disk_cache_state = BTRFS_DC_CLEAR;
 	}
 
-	if (!block_group->iref) {
+	if (!test_and_set_bit(BLOCK_GROUP_FLAG_IREF,
+			      &block_group->runtime_flags))
 		block_group->inode = igrab(inode);
-		block_group->iref = 1;
-	}
 	spin_unlock(&block_group->lock);
 
 	return inode;
@@ -241,8 +240,8 @@ int btrfs_remove_free_space_inode(struct btrfs_trans_handle *trans,
 	clear_nlink(inode);
 	/* One for the block groups ref */
 	spin_lock(&block_group->lock);
-	if (block_group->iref) {
-		block_group->iref = 0;
+	if (test_and_clear_bit(BLOCK_GROUP_FLAG_IREF,
+			       &block_group->runtime_flags)) {
 		block_group->inode = NULL;
 		spin_unlock(&block_group->lock);
 		iput(inode);
@@ -2860,7 +2859,8 @@ void btrfs_dump_free_space(struct btrfs_block_group *block_group,
 	if (btrfs_is_zoned(fs_info)) {
 		btrfs_info(fs_info, "free space %llu active %d",
 			   block_group->zone_capacity - block_group->alloc_offset,
-			   block_group->zone_is_active);
+			   test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+				    &block_group->runtime_flags));
 		return;
 	}
 
@@ -3992,7 +3992,7 @@ int btrfs_trim_block_group(struct btrfs_block_group *block_group,
 	*trimmed = 0;
 
 	spin_lock(&block_group->lock);
-	if (block_group->removed) {
+	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags)) {
 		spin_unlock(&block_group->lock);
 		return 0;
 	}
@@ -4022,7 +4022,7 @@ int btrfs_trim_block_group_extents(struct btrfs_block_group *block_group,
 	*trimmed = 0;
 
 	spin_lock(&block_group->lock);
-	if (block_group->removed) {
+	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags)) {
 		spin_unlock(&block_group->lock);
 		return 0;
 	}
@@ -4044,7 +4044,7 @@ int btrfs_trim_block_group_bitmaps(struct btrfs_block_group *block_group,
 	*trimmed = 0;
 
 	spin_lock(&block_group->lock);
-	if (block_group->removed) {
+	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags)) {
 		spin_unlock(&block_group->lock);
 		return 0;
 	}
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 3afe5fa50a63..b7be62f1cd8e 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3266,7 +3266,7 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
 		}
 		/* Block group removed? */
 		spin_lock(&bg->lock);
-		if (bg->removed) {
+		if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &bg->runtime_flags)) {
 			spin_unlock(&bg->lock);
 			ret = 0;
 			break;
@@ -3606,7 +3606,7 @@ static noinline_for_stack int scrub_chunk(struct scrub_ctx *sctx,
 		 * kthread or relocation.
 		 */
 		spin_lock(&bg->lock);
-		if (!bg->removed)
+		if (!test_bit(BLOCK_GROUP_FLAG_REMOVED, &bg->runtime_flags))
 			ret = -EINVAL;
 		spin_unlock(&bg->lock);
 
@@ -3765,7 +3765,8 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 
 		if (sctx->is_dev_replace && btrfs_is_zoned(fs_info)) {
 			spin_lock(&cache->lock);
-			if (!cache->to_copy) {
+			if (!test_bit(BLOCK_GROUP_FLAG_TO_COPY,
+				      &cache->runtime_flags)) {
 				spin_unlock(&cache->lock);
 				btrfs_put_block_group(cache);
 				goto skip;
@@ -3782,7 +3783,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 		 * repair extents.
 		 */
 		spin_lock(&cache->lock);
-		if (cache->removed) {
+		if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &cache->runtime_flags)) {
 			spin_unlock(&cache->lock);
 			btrfs_put_block_group(cache);
 			goto skip;
@@ -3942,8 +3943,8 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 		 * balance is triggered or it becomes used and unused again.
 		 */
 		spin_lock(&cache->lock);
-		if (!cache->removed && !cache->ro && cache->reserved == 0 &&
-		    cache->used == 0) {
+		if (!test_bit(BLOCK_GROUP_FLAG_REMOVED, &cache->runtime_flags) &&
+		    !cache->ro && cache->reserved == 0 && cache->used == 0) {
 			spin_unlock(&cache->lock);
 			if (btrfs_test_opt(fs_info, DISCARD_ASYNC))
 				btrfs_discard_queue_work(&fs_info->discard_ctl,
diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
index f89aa49f53d4..477e57ace48d 100644
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -305,7 +305,7 @@ void btrfs_add_bg_to_space_info(struct btrfs_fs_info *info,
 	ASSERT(found);
 	spin_lock(&found->lock);
 	found->total_bytes += block_group->length;
-	if (block_group->zone_is_active)
+	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags))
 		found->active_total_bytes += block_group->length;
 	found->disk_total += block_group->length * factor;
 	found->bytes_used += block_group->used;
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 22bfc7806ccb..4de09c730d3c 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5592,7 +5592,7 @@ int btrfs_chunk_alloc_add_chunk_item(struct btrfs_trans_handle *trans,
 	if (ret)
 		goto out;
 
-	bg->chunk_item_inserted = 1;
+	set_bit(BLOCK_GROUP_FLAG_CHUNK_ITEM_INSERTED, &bg->runtime_flags);
 
 	if (map->type & BTRFS_BLOCK_GROUP_SYSTEM) {
 		ret = btrfs_add_system_chunk(fs_info, &key, chunk, item_size);
@@ -6151,7 +6151,7 @@ static bool is_block_group_to_copy(struct btrfs_fs_info *fs_info, u64 logical)
 	cache = btrfs_lookup_block_group(fs_info, logical);
 
 	spin_lock(&cache->lock);
-	ret = cache->to_copy;
+	ret = test_bit(BLOCK_GROUP_FLAG_TO_COPY, &cache->runtime_flags);
 	spin_unlock(&cache->lock);
 
 	btrfs_put_block_group(cache);
@@ -8241,7 +8241,8 @@ static int relocating_repair_kthread(void *data)
 	if (!cache)
 		goto out;
 
-	if (!cache->relocating_repair)
+	if (!test_bit(BLOCK_GROUP_FLAG_RELOCATING_REPAIR,
+		      &cache->runtime_flags))
 		goto out;
 
 	ret = btrfs_may_alloc_data_chunk(fs_info, target);
@@ -8279,12 +8280,12 @@ bool btrfs_repair_one_zone(struct btrfs_fs_info *fs_info, u64 logical)
 		return true;
 
 	spin_lock(&cache->lock);
-	if (cache->relocating_repair) {
+	if (test_and_set_bit(BLOCK_GROUP_FLAG_RELOCATING_REPAIR,
+			     &cache->runtime_flags)) {
 		spin_unlock(&cache->lock);
 		btrfs_put_block_group(cache);
 		return true;
 	}
-	cache->relocating_repair = 1;
 	spin_unlock(&cache->lock);
 
 	kthread_run(relocating_repair_kthread, cache,
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index b150b07ba1a7..dd2704bee6b4 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1443,7 +1443,9 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
 		}
 		cache->alloc_offset = alloc_offsets[0];
 		cache->zone_capacity = caps[0];
-		cache->zone_is_active = test_bit(0, active);
+		if (test_bit(0, active))
+			set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+				&cache->runtime_flags);
 		break;
 	case BTRFS_BLOCK_GROUP_DUP:
 		if (map->type & BTRFS_BLOCK_GROUP_DATA) {
@@ -1477,7 +1479,9 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
 				goto out;
 			}
 		} else {
-			cache->zone_is_active = test_bit(0, active);
+			if (test_bit(0, active))
+				set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+					&cache->runtime_flags);
 		}
 		cache->alloc_offset = alloc_offsets[0];
 		cache->zone_capacity = min(caps[0], caps[1]);
@@ -1495,7 +1499,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
 		goto out;
 	}
 
-	if (cache->zone_is_active) {
+	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &cache->runtime_flags)) {
 		btrfs_get_block_group(cache);
 		spin_lock(&fs_info->zone_active_bgs_lock);
 		list_add_tail(&cache->active_bg_list, &fs_info->zone_active_bgs);
@@ -1863,7 +1867,8 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 
 	spin_lock(&space_info->lock);
 	spin_lock(&block_group->lock);
-	if (block_group->zone_is_active) {
+	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+		     &block_group->runtime_flags)) {
 		ret = true;
 		goto out_unlock;
 	}
@@ -1889,8 +1894,7 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 	}
 
 	/* Successfully activated all the zones */
-	block_group->zone_is_active = 1;
-	space_info->active_total_bytes += block_group->length;
+	set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags);
 	spin_unlock(&block_group->lock);
 	btrfs_try_granting_tickets(fs_info, space_info);
 	spin_unlock(&space_info->lock);
@@ -1918,7 +1922,8 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
 	int i;
 
 	spin_lock(&block_group->lock);
-	if (!block_group->zone_is_active) {
+	if (!test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+		      &block_group->runtime_flags)) {
 		spin_unlock(&block_group->lock);
 		return 0;
 	}
@@ -1957,7 +1962,8 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
 		 * Bail out if someone already deactivated the block group, or
 		 * allocated space is left in the block group.
 		 */
-		if (!block_group->zone_is_active) {
+		if (!test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+			      &block_group->runtime_flags)) {
 			spin_unlock(&block_group->lock);
 			btrfs_dec_block_group_ro(block_group);
 			return 0;
@@ -1970,7 +1976,7 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
 		}
 	}
 
-	block_group->zone_is_active = 0;
+	clear_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags);
 	block_group->alloc_offset = block_group->zone_capacity;
 	block_group->free_space_ctl->free_space = 0;
 	btrfs_clear_treelog_bg(block_group);
@@ -2179,13 +2185,15 @@ void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logica
 	ASSERT(block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA));
 
 	spin_lock(&block_group->lock);
-	if (!block_group->zoned_data_reloc_ongoing)
+	if (!test_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC,
+		      &block_group->runtime_flags))
 		goto out;
 
 	/* All relocation extents are written. */
 	if (block_group->start + block_group->alloc_offset == logical + length) {
 		/* Now, release this block group for further allocations. */
-		block_group->zoned_data_reloc_ongoing = 0;
+		clear_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC,
+			  &block_group->runtime_flags);
 	}
 
 out:
@@ -2257,7 +2265,9 @@ int btrfs_zoned_activate_one_bg(struct btrfs_fs_info *fs_info,
 					    list) {
 				if (!spin_trylock(&bg->lock))
 					continue;
-				if (btrfs_zoned_bg_is_full(bg) || bg->zone_is_active) {
+				if (btrfs_zoned_bg_is_full(bg) ||
+				    test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
+					     &bg->runtime_flags)) {
 					spin_unlock(&bg->lock);
 					continue;
 				}
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 5/9] btrfs: remove block_group->lock protection for TO_COPY
  2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
                   ` (3 preceding siblings ...)
  2022-08-05 14:14 ` [PATCH v3 4/9] btrfs: convert block group bit field to use bit helpers Josef Bacik
@ 2022-08-05 14:14 ` Josef Bacik
  2022-08-05 14:14 ` [PATCH v3 6/9] btrfs: simplify btrfs_put_block_group_cache Josef Bacik
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Josef Bacik @ 2022-08-05 14:14 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

We use this flag during device replace for zoned devices.  We were only
taking the lock because the flag lived in a bit field, where the lock
was needed to keep updates safe against other modifications of the
bitfield.  With the atomic bit helpers we no longer require that
locking.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/dev-replace.c | 5 -----
 fs/btrfs/scrub.c       | 3 ---
 fs/btrfs/volumes.c     | 2 --
 3 files changed, 10 deletions(-)

diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
index f85bbd99230b..488f2105c5d0 100644
--- a/fs/btrfs/dev-replace.c
+++ b/fs/btrfs/dev-replace.c
@@ -545,10 +545,7 @@ static int mark_block_group_to_copy(struct btrfs_fs_info *fs_info,
 		if (!cache)
 			continue;
 
-		spin_lock(&cache->lock);
 		set_bit(BLOCK_GROUP_FLAG_TO_COPY, &cache->runtime_flags);
-		spin_unlock(&cache->lock);
-
 		btrfs_put_block_group(cache);
 	}
 	if (iter_ret < 0)
@@ -610,9 +607,7 @@ bool btrfs_finish_block_group_to_copy(struct btrfs_device *srcdev,
 	}
 
 	/* Last stripe on this device */
-	spin_lock(&cache->lock);
 	clear_bit(BLOCK_GROUP_FLAG_TO_COPY, &cache->runtime_flags);
-	spin_unlock(&cache->lock);
 
 	return true;
 }
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index b7be62f1cd8e..14af085fe868 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3764,14 +3764,11 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 		}
 
 		if (sctx->is_dev_replace && btrfs_is_zoned(fs_info)) {
-			spin_lock(&cache->lock);
 			if (!test_bit(BLOCK_GROUP_FLAG_TO_COPY,
 				      &cache->runtime_flags)) {
-				spin_unlock(&cache->lock);
 				btrfs_put_block_group(cache);
 				goto skip;
 			}
-			spin_unlock(&cache->lock);
 		}
 
 		/*
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 4de09c730d3c..8a6b5f6a8f8c 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -6150,9 +6150,7 @@ static bool is_block_group_to_copy(struct btrfs_fs_info *fs_info, u64 logical)
 
 	cache = btrfs_lookup_block_group(fs_info, logical);
 
-	spin_lock(&cache->lock);
 	ret = test_bit(BLOCK_GROUP_FLAG_TO_COPY, &cache->runtime_flags);
-	spin_unlock(&cache->lock);
 
 	btrfs_put_block_group(cache);
 	return ret;
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 6/9] btrfs: simplify btrfs_put_block_group_cache
  2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
                   ` (4 preceding siblings ...)
  2022-08-05 14:14 ` [PATCH v3 5/9] btrfs: remove block_group->lock protection for TO_COPY Josef Bacik
@ 2022-08-05 14:14 ` Josef Bacik
  2022-08-05 14:14 ` [PATCH v3 7/9] btrfs: remove BLOCK_GROUP_FLAG_HAS_CACHING_CTL Josef Bacik
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Josef Bacik @ 2022-08-05 14:14 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

We're breaking out and re-searching for the next block group while
evicting any of the block group cache inodes.  This is not needed; the
block groups aren't disappearing here.  We can simply loop through the
block groups as normal and iput() any inode that we find.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/block-group.c | 42 +++++++++++++++---------------------------
 1 file changed, 15 insertions(+), 27 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 8fd54f4dd2de..b94aa8087d98 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -3969,36 +3969,24 @@ void btrfs_reserve_chunk_metadata(struct btrfs_trans_handle *trans,
 void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
 {
 	struct btrfs_block_group *block_group;
-	u64 last = 0;
 
-	while (1) {
-		struct inode *inode;
-
-		block_group = btrfs_lookup_first_block_group(info, last);
-		while (block_group) {
-			btrfs_wait_block_group_cache_done(block_group);
-			spin_lock(&block_group->lock);
-			if (test_bit(BLOCK_GROUP_FLAG_IREF,
-				     &block_group->runtime_flags))
-				break;
+	block_group = btrfs_lookup_first_block_group(info, 0);
+	while (block_group) {
+		btrfs_wait_block_group_cache_done(block_group);
+		spin_lock(&block_group->lock);
+		if (test_and_clear_bit(BLOCK_GROUP_FLAG_IREF,
+				       &block_group->runtime_flags)) {
+			struct inode *inode = block_group->inode;
+
+			block_group->inode = NULL;
 			spin_unlock(&block_group->lock);
-			block_group = btrfs_next_block_group(block_group);
-		}
-		if (!block_group) {
-			if (last == 0)
-				break;
-			last = 0;
-			continue;
-		}
 
-		inode = block_group->inode;
-		clear_bit(BLOCK_GROUP_FLAG_IREF, &block_group->runtime_flags);
-		block_group->inode = NULL;
-		spin_unlock(&block_group->lock);
-		ASSERT(block_group->io_ctl.inode == NULL);
-		iput(inode);
-		last = block_group->start + block_group->length;
-		btrfs_put_block_group(block_group);
+			ASSERT(block_group->io_ctl.inode == NULL);
+			iput(inode);
+		} else {
+			spin_unlock(&block_group->lock);
+		}
+		block_group = btrfs_next_block_group(block_group);
 	}
 }
 
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 7/9] btrfs: remove BLOCK_GROUP_FLAG_HAS_CACHING_CTL
  2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
                   ` (5 preceding siblings ...)
  2022-08-05 14:14 ` [PATCH v3 6/9] btrfs: simplify btrfs_put_block_group_cache Josef Bacik
@ 2022-08-05 14:14 ` Josef Bacik
  2022-08-05 14:14 ` [PATCH v3 8/9] btrfs: remove bg->lock protection for relocation repair flag Josef Bacik
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Josef Bacik @ 2022-08-05 14:14 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

This flag is mostly used to determine whether we need to look at the
caching ctl list and clean up any references to this block group.
However we never clear it, precisely because we may still need to remove
a caching ctl we have for this block group.  Since this is in the remove
block group path, which isn't a fast path, the optimization doesn't
really matter.  Simplify this logic and remove the flag.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/block-group.c | 46 +++++++++++++++++++-----------------------
 fs/btrfs/block-group.h |  1 -
 2 files changed, 21 insertions(+), 26 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index b94aa8087d98..6215f50b62d2 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -789,7 +789,6 @@ int btrfs_cache_block_group(struct btrfs_block_group *cache, int load_cache_only
 		cache->cached = BTRFS_CACHE_FAST;
 	else
 		cache->cached = BTRFS_CACHE_STARTED;
-	set_bit(BLOCK_GROUP_FLAG_HAS_CACHING_CTL, &cache->runtime_flags);
 	spin_unlock(&cache->lock);
 
 	write_lock(&fs_info->block_group_cache_lock);
@@ -1006,34 +1005,31 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
 	}
 
 
-	if (test_bit(BLOCK_GROUP_FLAG_HAS_CACHING_CTL,
-		     &block_group->runtime_flags))
-		caching_ctl = btrfs_get_caching_control(block_group);
 	if (block_group->cached == BTRFS_CACHE_STARTED)
 		btrfs_wait_block_group_cache_done(block_group);
-	if (test_bit(BLOCK_GROUP_FLAG_HAS_CACHING_CTL,
-		     &block_group->runtime_flags)) {
-		write_lock(&fs_info->block_group_cache_lock);
-		if (!caching_ctl) {
-			struct btrfs_caching_control *ctl;
-
-			list_for_each_entry(ctl,
-				    &fs_info->caching_block_groups, list)
-				if (ctl->block_group == block_group) {
-					caching_ctl = ctl;
-					refcount_inc(&caching_ctl->count);
-					break;
-				}
-		}
-		if (caching_ctl)
-			list_del_init(&caching_ctl->list);
-		write_unlock(&fs_info->block_group_cache_lock);
-		if (caching_ctl) {
-			/* Once for the caching bgs list and once for us. */
-			btrfs_put_caching_control(caching_ctl);
-			btrfs_put_caching_control(caching_ctl);
+
+	write_lock(&fs_info->block_group_cache_lock);
+	caching_ctl = btrfs_get_caching_control(block_group);
+	if (!caching_ctl) {
+		struct btrfs_caching_control *ctl;
+
+		list_for_each_entry(ctl, &fs_info->caching_block_groups, list) {
+			if (ctl->block_group == block_group) {
+				caching_ctl = ctl;
+				refcount_inc(&caching_ctl->count);
+				break;
+			}
 		}
 	}
+	if (caching_ctl)
+		list_del_init(&caching_ctl->list);
+	write_unlock(&fs_info->block_group_cache_lock);
+
+	if (caching_ctl) {
+		/* Once for the caching bgs list and once for us. */
+		btrfs_put_caching_control(caching_ctl);
+		btrfs_put_caching_control(caching_ctl);
+	}
 
 	spin_lock(&trans->transaction->dirty_bgs_lock);
 	WARN_ON(!list_empty(&block_group->dirty_list));
diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index 8008a391ed8c..fffcc7789fa7 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -48,7 +48,6 @@ enum btrfs_chunk_alloc_enum {
 
 enum btrfs_block_group_flags {
 	BLOCK_GROUP_FLAG_IREF,
-	BLOCK_GROUP_FLAG_HAS_CACHING_CTL,
 	BLOCK_GROUP_FLAG_REMOVED,
 	BLOCK_GROUP_FLAG_TO_COPY,
 	BLOCK_GROUP_FLAG_RELOCATING_REPAIR,
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 8/9] btrfs: remove bg->lock protection for relocation repair flag
  2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
                   ` (6 preceding siblings ...)
  2022-08-05 14:14 ` [PATCH v3 7/9] btrfs: remove BLOCK_GROUP_FLAG_HAS_CACHING_CTL Josef Bacik
@ 2022-08-05 14:14 ` Josef Bacik
  2022-08-05 14:15 ` [PATCH v3 9/9] btrfs: delete btrfs_wait_space_cache_v1_finished Josef Bacik
  2022-08-05 16:37 ` [PATCH v3 0/9] btrfs: block group cleanups David Sterba
  9 siblings, 0 replies; 12+ messages in thread
From: Josef Bacik @ 2022-08-05 14:14 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

Previously, when this flag was part of the bit field, we had to protect
modifications with bg->lock.  Now that we're using the atomic bit
helpers we can stop taking bg->lock here.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/volumes.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 8a6b5f6a8f8c..7eebd2c5e5b3 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -8277,14 +8277,11 @@ bool btrfs_repair_one_zone(struct btrfs_fs_info *fs_info, u64 logical)
 	if (!cache)
 		return true;
 
-	spin_lock(&cache->lock);
 	if (test_and_set_bit(BLOCK_GROUP_FLAG_RELOCATING_REPAIR,
 			     &cache->runtime_flags)) {
-		spin_unlock(&cache->lock);
 		btrfs_put_block_group(cache);
 		return true;
 	}
-	spin_unlock(&cache->lock);
 
 	kthread_run(relocating_repair_kthread, cache,
 		    "btrfs-relocating-repair");
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 9/9] btrfs: delete btrfs_wait_space_cache_v1_finished
  2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
                   ` (7 preceding siblings ...)
  2022-08-05 14:14 ` [PATCH v3 8/9] btrfs: remove bg->lock protection for relocation repair flag Josef Bacik
@ 2022-08-05 14:15 ` Josef Bacik
  2022-08-05 16:37 ` [PATCH v3 0/9] btrfs: block group cleanups David Sterba
  9 siblings, 0 replies; 12+ messages in thread
From: Josef Bacik @ 2022-08-05 14:15 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

We used to call this helper in a few spots, but now it is only used
inside block-group.c, so remove the helper and open code the wait at its
single remaining call site.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/block-group.c | 8 +-------
 fs/btrfs/block-group.h | 2 --
 2 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 6215f50b62d2..8028a4c26b89 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -467,12 +467,6 @@ static bool space_cache_v1_done(struct btrfs_block_group *cache)
 	return ret;
 }
 
-void btrfs_wait_space_cache_v1_finished(struct btrfs_block_group *cache,
-				struct btrfs_caching_control *caching_ctl)
-{
-	wait_event(caching_ctl->wait, space_cache_v1_done(cache));
-}
-
 #ifdef CONFIG_BTRFS_DEBUG
 static void fragment_free_space(struct btrfs_block_group *block_group)
 {
@@ -801,7 +795,7 @@ int btrfs_cache_block_group(struct btrfs_block_group *cache, int load_cache_only
 	btrfs_queue_work(fs_info->caching_workers, &caching_ctl->work);
 out:
 	if (load_cache_only && caching_ctl)
-		btrfs_wait_space_cache_v1_finished(cache, caching_ctl);
+		wait_event(caching_ctl->wait, space_cache_v1_done(cache));
 	if (caching_ctl)
 		btrfs_put_caching_control(caching_ctl);
 
diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index fffcc7789fa7..96382ca5cbfb 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -310,8 +310,6 @@ void btrfs_reserve_chunk_metadata(struct btrfs_trans_handle *trans,
 u64 btrfs_get_alloc_profile(struct btrfs_fs_info *fs_info, u64 orig_flags);
 void btrfs_put_block_group_cache(struct btrfs_fs_info *info);
 int btrfs_free_block_groups(struct btrfs_fs_info *info);
-void btrfs_wait_space_cache_v1_finished(struct btrfs_block_group *cache,
-				struct btrfs_caching_control *caching_ctl);
 int btrfs_rmap_block(struct btrfs_fs_info *fs_info, u64 chunk_start,
 		       struct block_device *bdev, u64 physical, u64 **logical,
 		       int *naddrs, int *stripe_len);
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 0/9] btrfs: block group cleanups
  2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
                   ` (8 preceding siblings ...)
  2022-08-05 14:15 ` [PATCH v3 9/9] btrfs: delete btrfs_wait_space_cache_v1_finished Josef Bacik
@ 2022-08-05 16:37 ` David Sterba
  9 siblings, 0 replies; 12+ messages in thread
From: David Sterba @ 2022-08-05 16:37 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs, kernel-team

On Fri, Aug 05, 2022 at 10:14:51AM -0400, Josef Bacik wrote:
> v2->v3:
> - I removed a check for FS_OPEN in the first patch which was incorrect.

V3 replaced in misc-next, thanks.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 4/9] btrfs: convert block group bit field to use bit helpers
  2022-08-05 14:14 ` [PATCH v3 4/9] btrfs: convert block group bit field to use bit helpers Josef Bacik
@ 2022-08-10  7:42   ` Naohiro Aota
  0 siblings, 0 replies; 12+ messages in thread
From: Naohiro Aota @ 2022-08-10  7:42 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs, kernel-team

On Fri, Aug 05, 2022 at 10:14:55AM -0400, Josef Bacik wrote:
> We use a bit field in the btrfs_block_group for different flags; however
> this is awkward because we have to hold the block_group->lock for any
> modification of any of these fields, which makes the code clunky for a
> few of these flags.  Convert these to a proper flags setup so we can
> utilize the bit helpers.
> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> ---
>  fs/btrfs/block-group.c      | 27 +++++++++++++++++----------
>  fs/btrfs/block-group.h      | 20 ++++++++++++--------
>  fs/btrfs/dev-replace.c      |  6 +++---
>  fs/btrfs/extent-tree.c      |  7 +++++--
>  fs/btrfs/free-space-cache.c | 18 +++++++++---------
>  fs/btrfs/scrub.c            | 13 +++++++------
>  fs/btrfs/space-info.c       |  2 +-
>  fs/btrfs/volumes.c          | 11 ++++++-----
>  fs/btrfs/zoned.c            | 34 ++++++++++++++++++++++------------
>  9 files changed, 82 insertions(+), 56 deletions(-)
> 
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index 5f062c5d3b6f..8fd54f4dd2de 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -789,7 +789,7 @@ int btrfs_cache_block_group(struct btrfs_block_group *cache, int load_cache_only
>  		cache->cached = BTRFS_CACHE_FAST;
>  	else
>  		cache->cached = BTRFS_CACHE_STARTED;
> -	cache->has_caching_ctl = 1;
> +	set_bit(BLOCK_GROUP_FLAG_HAS_CACHING_CTL, &cache->runtime_flags);
>  	spin_unlock(&cache->lock);
>  
>  	write_lock(&fs_info->block_group_cache_lock);
> @@ -1005,11 +1005,14 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
>  		kobject_put(kobj);
>  	}
>  
> -	if (block_group->has_caching_ctl)
> +
> +	if (test_bit(BLOCK_GROUP_FLAG_HAS_CACHING_CTL,
> +		     &block_group->runtime_flags))
>  		caching_ctl = btrfs_get_caching_control(block_group);
>  	if (block_group->cached == BTRFS_CACHE_STARTED)
>  		btrfs_wait_block_group_cache_done(block_group);
> -	if (block_group->has_caching_ctl) {
> +	if (test_bit(BLOCK_GROUP_FLAG_HAS_CACHING_CTL,
> +		     &block_group->runtime_flags)) {
>  		write_lock(&fs_info->block_group_cache_lock);
>  		if (!caching_ctl) {
>  			struct btrfs_caching_control *ctl;
> @@ -1051,12 +1054,13 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
>  			< block_group->zone_unusable);
>  		WARN_ON(block_group->space_info->disk_total
>  			< block_group->length * factor);
> -		WARN_ON(block_group->zone_is_active &&
> +		WARN_ON(test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
> +				 &block_group->runtime_flags) &&
>  			block_group->space_info->active_total_bytes
>  			< block_group->length);
>  	}
>  	block_group->space_info->total_bytes -= block_group->length;
> -	if (block_group->zone_is_active)
> +	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags))
>  		block_group->space_info->active_total_bytes -= block_group->length;
>  	block_group->space_info->bytes_readonly -=
>  		(block_group->length - block_group->zone_unusable);
> @@ -1086,7 +1090,8 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
>  		goto out;
>  
>  	spin_lock(&block_group->lock);
> -	block_group->removed = 1;
> +	set_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags);
> +
>  	/*
>  	 * At this point trimming or scrub can't start on this block group,
>  	 * because we removed the block group from the rbtree
> @@ -2426,7 +2431,8 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
>  		ret = insert_block_group_item(trans, block_group);
>  		if (ret)
>  			btrfs_abort_transaction(trans, ret);
> -		if (!block_group->chunk_item_inserted) {
> +		if (!test_bit(BLOCK_GROUP_FLAG_CHUNK_ITEM_INSERTED,
> +			      &block_group->runtime_flags)) {
>  			mutex_lock(&fs_info->chunk_mutex);
>  			ret = btrfs_chunk_alloc_add_chunk_item(trans, block_group);
>  			mutex_unlock(&fs_info->chunk_mutex);
> @@ -3972,7 +3978,8 @@ void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
>  		while (block_group) {
>  			btrfs_wait_block_group_cache_done(block_group);
>  			spin_lock(&block_group->lock);
> -			if (block_group->iref)
> +			if (test_bit(BLOCK_GROUP_FLAG_IREF,
> +				     &block_group->runtime_flags))
>  				break;
>  			spin_unlock(&block_group->lock);
>  			block_group = btrfs_next_block_group(block_group);
> @@ -3985,7 +3992,7 @@ void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
>  		}
>  
>  		inode = block_group->inode;
> -		block_group->iref = 0;
> +		clear_bit(BLOCK_GROUP_FLAG_IREF, &block_group->runtime_flags);
>  		block_group->inode = NULL;
>  		spin_unlock(&block_group->lock);
>  		ASSERT(block_group->io_ctl.inode == NULL);
> @@ -4127,7 +4134,7 @@ void btrfs_unfreeze_block_group(struct btrfs_block_group *block_group)
>  
>  	spin_lock(&block_group->lock);
>  	cleanup = (atomic_dec_and_test(&block_group->frozen) &&
> -		   block_group->removed);
> +		   test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags));
>  	spin_unlock(&block_group->lock);
>  
>  	if (cleanup) {
> diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
> index 35e0e860cc0b..8008a391ed8c 100644
> --- a/fs/btrfs/block-group.h
> +++ b/fs/btrfs/block-group.h
> @@ -46,6 +46,17 @@ enum btrfs_chunk_alloc_enum {
>  	CHUNK_ALLOC_FORCE_FOR_EXTENT,
>  };
>  
> +enum btrfs_block_group_flags {
> +	BLOCK_GROUP_FLAG_IREF,
> +	BLOCK_GROUP_FLAG_HAS_CACHING_CTL,
> +	BLOCK_GROUP_FLAG_REMOVED,
> +	BLOCK_GROUP_FLAG_TO_COPY,
> +	BLOCK_GROUP_FLAG_RELOCATING_REPAIR,
> +	BLOCK_GROUP_FLAG_CHUNK_ITEM_INSERTED,
> +	BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
> +	BLOCK_GROUP_FLAG_ZONED_DATA_RELOC,
> +};
> +
>  struct btrfs_caching_control {
>  	struct list_head list;
>  	struct mutex mutex;
> @@ -95,16 +106,9 @@ struct btrfs_block_group {
>  
>  	/* For raid56, this is a full stripe, without parity */
>  	unsigned long full_stripe_len;
> +	unsigned long runtime_flags;
>  
>  	unsigned int ro;
> -	unsigned int iref:1;
> -	unsigned int has_caching_ctl:1;
> -	unsigned int removed:1;
> -	unsigned int to_copy:1;
> -	unsigned int relocating_repair:1;
> -	unsigned int chunk_item_inserted:1;
> -	unsigned int zone_is_active:1;
> -	unsigned int zoned_data_reloc_ongoing:1;
>  
>  	int disk_cache_state;
>  
> diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
> index f43196a893ca..f85bbd99230b 100644
> --- a/fs/btrfs/dev-replace.c
> +++ b/fs/btrfs/dev-replace.c
> @@ -546,7 +546,7 @@ static int mark_block_group_to_copy(struct btrfs_fs_info *fs_info,
>  			continue;
>  
>  		spin_lock(&cache->lock);
> -		cache->to_copy = 1;
> +		set_bit(BLOCK_GROUP_FLAG_TO_COPY, &cache->runtime_flags);
>  		spin_unlock(&cache->lock);
>  
>  		btrfs_put_block_group(cache);
> @@ -577,7 +577,7 @@ bool btrfs_finish_block_group_to_copy(struct btrfs_device *srcdev,
>  		return true;
>  
>  	spin_lock(&cache->lock);
> -	if (cache->removed) {
> +	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &cache->runtime_flags)) {
>  		spin_unlock(&cache->lock);
>  		return true;
>  	}
> @@ -611,7 +611,7 @@ bool btrfs_finish_block_group_to_copy(struct btrfs_device *srcdev,
>  
>  	/* Last stripe on this device */
>  	spin_lock(&cache->lock);
> -	cache->to_copy = 0;
> +	clear_bit(BLOCK_GROUP_FLAG_TO_COPY, &cache->runtime_flags);
>  	spin_unlock(&cache->lock);
>  
>  	return true;
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index ea3ec1e761e8..fbf10cd0155e 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -3816,7 +3816,9 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
>  	       block_group->start == fs_info->data_reloc_bg ||
>  	       fs_info->data_reloc_bg == 0);
>  
> -	if (block_group->ro || block_group->zoned_data_reloc_ongoing) {
> +	if (block_group->ro ||
> +	    test_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC,
> +		     &block_group->runtime_flags)) {
>  		ret = 1;
>  		goto out;
>  	}
> @@ -3893,7 +3895,8 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
>  		 * regular extents) at the same time to the same zone, which
>  		 * easily break the write pointer.
>  		 */
> -		block_group->zoned_data_reloc_ongoing = 1;
> +		set_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC,
> +			&block_group->runtime_flags);
>  		fs_info->data_reloc_bg = 0;
>  	}
>  	spin_unlock(&fs_info->relocation_bg_lock);
> diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
> index 996da650ecdc..fd73327134ac 100644
> --- a/fs/btrfs/free-space-cache.c
> +++ b/fs/btrfs/free-space-cache.c
> @@ -126,10 +126,9 @@ struct inode *lookup_free_space_inode(struct btrfs_block_group *block_group,
>  		block_group->disk_cache_state = BTRFS_DC_CLEAR;
>  	}
>  
> -	if (!block_group->iref) {
> +	if (!test_and_set_bit(BLOCK_GROUP_FLAG_IREF,
> +			      &block_group->runtime_flags))
>  		block_group->inode = igrab(inode);
> -		block_group->iref = 1;
> -	}
>  	spin_unlock(&block_group->lock);
>  
>  	return inode;
> @@ -241,8 +240,8 @@ int btrfs_remove_free_space_inode(struct btrfs_trans_handle *trans,
>  	clear_nlink(inode);
>  	/* One for the block groups ref */
>  	spin_lock(&block_group->lock);
> -	if (block_group->iref) {
> -		block_group->iref = 0;
> +	if (test_and_clear_bit(BLOCK_GROUP_FLAG_IREF,
> +			       &block_group->runtime_flags)) {
>  		block_group->inode = NULL;
>  		spin_unlock(&block_group->lock);
>  		iput(inode);
> @@ -2860,7 +2859,8 @@ void btrfs_dump_free_space(struct btrfs_block_group *block_group,
>  	if (btrfs_is_zoned(fs_info)) {
>  		btrfs_info(fs_info, "free space %llu active %d",
>  			   block_group->zone_capacity - block_group->alloc_offset,
> -			   block_group->zone_is_active);
> +			   test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
> +				    &block_group->runtime_flags));
>  		return;
>  	}
>  
> @@ -3992,7 +3992,7 @@ int btrfs_trim_block_group(struct btrfs_block_group *block_group,
>  	*trimmed = 0;
>  
>  	spin_lock(&block_group->lock);
> -	if (block_group->removed) {
> +	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags)) {
>  		spin_unlock(&block_group->lock);
>  		return 0;
>  	}
> @@ -4022,7 +4022,7 @@ int btrfs_trim_block_group_extents(struct btrfs_block_group *block_group,
>  	*trimmed = 0;
>  
>  	spin_lock(&block_group->lock);
> -	if (block_group->removed) {
> +	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags)) {
>  		spin_unlock(&block_group->lock);
>  		return 0;
>  	}
> @@ -4044,7 +4044,7 @@ int btrfs_trim_block_group_bitmaps(struct btrfs_block_group *block_group,
>  	*trimmed = 0;
>  
>  	spin_lock(&block_group->lock);
> -	if (block_group->removed) {
> +	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags)) {
>  		spin_unlock(&block_group->lock);
>  		return 0;
>  	}
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index 3afe5fa50a63..b7be62f1cd8e 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -3266,7 +3266,7 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
>  		}
>  		/* Block group removed? */
>  		spin_lock(&bg->lock);
> -		if (bg->removed) {
> +		if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &bg->runtime_flags)) {
>  			spin_unlock(&bg->lock);
>  			ret = 0;
>  			break;
> @@ -3606,7 +3606,7 @@ static noinline_for_stack int scrub_chunk(struct scrub_ctx *sctx,
>  		 * kthread or relocation.
>  		 */
>  		spin_lock(&bg->lock);
> -		if (!bg->removed)
> +		if (!test_bit(BLOCK_GROUP_FLAG_REMOVED, &bg->runtime_flags))
>  			ret = -EINVAL;
>  		spin_unlock(&bg->lock);
>  
> @@ -3765,7 +3765,8 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
>  
>  		if (sctx->is_dev_replace && btrfs_is_zoned(fs_info)) {
>  			spin_lock(&cache->lock);
> -			if (!cache->to_copy) {
> +			if (!test_bit(BLOCK_GROUP_FLAG_TO_COPY,
> +				      &cache->runtime_flags)) {
>  				spin_unlock(&cache->lock);
>  				btrfs_put_block_group(cache);
>  				goto skip;
> @@ -3782,7 +3783,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
>  		 * repair extents.
>  		 */
>  		spin_lock(&cache->lock);
> -		if (cache->removed) {
> +		if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &cache->runtime_flags)) {
>  			spin_unlock(&cache->lock);
>  			btrfs_put_block_group(cache);
>  			goto skip;
> @@ -3942,8 +3943,8 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
>  		 * balance is triggered or it becomes used and unused again.
>  		 */
>  		spin_lock(&cache->lock);
> -		if (!cache->removed && !cache->ro && cache->reserved == 0 &&
> -		    cache->used == 0) {
> +		if (!test_bit(BLOCK_GROUP_FLAG_REMOVED, &cache->runtime_flags) &&
> +		    !cache->ro && cache->reserved == 0 && cache->used == 0) {
>  			spin_unlock(&cache->lock);
>  			if (btrfs_test_opt(fs_info, DISCARD_ASYNC))
>  				btrfs_discard_queue_work(&fs_info->discard_ctl,
> diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
> index f89aa49f53d4..477e57ace48d 100644
> --- a/fs/btrfs/space-info.c
> +++ b/fs/btrfs/space-info.c
> @@ -305,7 +305,7 @@ void btrfs_add_bg_to_space_info(struct btrfs_fs_info *info,
>  	ASSERT(found);
>  	spin_lock(&found->lock);
>  	found->total_bytes += block_group->length;
> -	if (block_group->zone_is_active)
> +	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags))
>  		found->active_total_bytes += block_group->length;
>  	found->disk_total += block_group->length * factor;
>  	found->bytes_used += block_group->used;
> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> index 22bfc7806ccb..4de09c730d3c 100644
> --- a/fs/btrfs/volumes.c
> +++ b/fs/btrfs/volumes.c
> @@ -5592,7 +5592,7 @@ int btrfs_chunk_alloc_add_chunk_item(struct btrfs_trans_handle *trans,
>  	if (ret)
>  		goto out;
>  
> -	bg->chunk_item_inserted = 1;
> +	set_bit(BLOCK_GROUP_FLAG_CHUNK_ITEM_INSERTED, &bg->runtime_flags);
>  
>  	if (map->type & BTRFS_BLOCK_GROUP_SYSTEM) {
>  		ret = btrfs_add_system_chunk(fs_info, &key, chunk, item_size);
> @@ -6151,7 +6151,7 @@ static bool is_block_group_to_copy(struct btrfs_fs_info *fs_info, u64 logical)
>  	cache = btrfs_lookup_block_group(fs_info, logical);
>  
>  	spin_lock(&cache->lock);
> -	ret = cache->to_copy;
> +	ret = test_bit(BLOCK_GROUP_FLAG_TO_COPY, &cache->runtime_flags);
>  	spin_unlock(&cache->lock);
>  
>  	btrfs_put_block_group(cache);
> @@ -8241,7 +8241,8 @@ static int relocating_repair_kthread(void *data)
>  	if (!cache)
>  		goto out;
>  
> -	if (!cache->relocating_repair)
> +	if (!test_bit(BLOCK_GROUP_FLAG_RELOCATING_REPAIR,
> +		      &cache->runtime_flags))
>  		goto out;
>  
>  	ret = btrfs_may_alloc_data_chunk(fs_info, target);
> @@ -8279,12 +8280,12 @@ bool btrfs_repair_one_zone(struct btrfs_fs_info *fs_info, u64 logical)
>  		return true;
>  
>  	spin_lock(&cache->lock);
> -	if (cache->relocating_repair) {
> +	if (test_and_set_bit(BLOCK_GROUP_FLAG_RELOCATING_REPAIR,
> +			     &cache->runtime_flags)) {
>  		spin_unlock(&cache->lock);
>  		btrfs_put_block_group(cache);
>  		return true;
>  	}
> -	cache->relocating_repair = 1;
>  	spin_unlock(&cache->lock);
>  
>  	kthread_run(relocating_repair_kthread, cache,
> diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
> index b150b07ba1a7..dd2704bee6b4 100644
> --- a/fs/btrfs/zoned.c
> +++ b/fs/btrfs/zoned.c
> @@ -1443,7 +1443,9 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
>  		}
>  		cache->alloc_offset = alloc_offsets[0];
>  		cache->zone_capacity = caps[0];
> -		cache->zone_is_active = test_bit(0, active);
> +		if (test_bit(0, active))
> +			set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
> +				&cache->runtime_flags);
>  		break;
>  	case BTRFS_BLOCK_GROUP_DUP:
>  		if (map->type & BTRFS_BLOCK_GROUP_DATA) {
> @@ -1477,7 +1479,9 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
>  				goto out;
>  			}
>  		} else {
> -			cache->zone_is_active = test_bit(0, active);
> +			if (test_bit(0, active))
> +				set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
> +					&cache->runtime_flags);
>  		}
>  		cache->alloc_offset = alloc_offsets[0];
>  		cache->zone_capacity = min(caps[0], caps[1]);
> @@ -1495,7 +1499,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
>  		goto out;
>  	}
>  
> -	if (cache->zone_is_active) {
> +	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &cache->runtime_flags)) {
>  		btrfs_get_block_group(cache);
>  		spin_lock(&fs_info->zone_active_bgs_lock);
>  		list_add_tail(&cache->active_bg_list, &fs_info->zone_active_bgs);
> @@ -1863,7 +1867,8 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
>  
>  	spin_lock(&space_info->lock);
>  	spin_lock(&block_group->lock);
> -	if (block_group->zone_is_active) {
> +	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
> +		     &block_group->runtime_flags)) {
>  		ret = true;
>  		goto out_unlock;
>  	}
> @@ -1889,8 +1894,7 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
>  	}
>  
>  	/* Successfully activated all the zones */
> -	block_group->zone_is_active = 1;
> -	space_info->active_total_bytes += block_group->length;
> +	set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags);

Here, the increment of space_info->active_total_bytes appears to have been
dropped by mistake: the old code both set zone_is_active and added
block_group->length to active_total_bytes, but the replacement only sets the
bit.

Should I send a patch restoring this line? Or, David, could you fold the
fix into the misc-next branch?
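For reference, restoring the lost line on top of this patch might look like the
hunk below (an untested sketch against misc-next; the context line numbers are
approximate):

```diff
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 	/* Successfully activated all the zones */
 	set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags);
+	space_info->active_total_bytes += block_group->length;
 	spin_unlock(&block_group->lock);
```

Both spin locks (space_info->lock and block_group->lock) are still held at
this point in btrfs_zone_activate(), so the accounting update stays protected
as before.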

Thanks,

Thread overview: 12+ messages
2022-08-05 14:14 [PATCH v3 0/9] btrfs: block group cleanups Josef Bacik
2022-08-05 14:14 ` [PATCH v3 1/9] btrfs: use btrfs_fs_closing for background bg work Josef Bacik
2022-08-05 14:14 ` [PATCH v3 2/9] btrfs: simplify btrfs_update_space_info Josef Bacik
2022-08-05 14:14 ` [PATCH v3 3/9] btrfs: handle space_info setting of bg in btrfs_add_bg_to_space_info Josef Bacik
2022-08-05 14:14 ` [PATCH v3 4/9] btrfs: convert block group bit field to use bit helpers Josef Bacik
2022-08-10  7:42   ` Naohiro Aota
2022-08-05 14:14 ` [PATCH v3 5/9] btrfs: remove block_group->lock protection for TO_COPY Josef Bacik
2022-08-05 14:14 ` [PATCH v3 6/9] btrfs: simplify btrfs_put_block_group_cache Josef Bacik
2022-08-05 14:14 ` [PATCH v3 7/9] btrfs: remove BLOCK_GROUP_FLAG_HAS_CACHING_CTL Josef Bacik
2022-08-05 14:14 ` [PATCH v3 8/9] btrfs: remove bg->lock protection for relocation repair flag Josef Bacik
2022-08-05 14:15 ` [PATCH v3 9/9] btrfs: delete btrfs_wait_space_cache_v1_finished Josef Bacik
2022-08-05 16:37 ` [PATCH v3 0/9] btrfs: block group cleanups David Sterba
