* [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group
@ 2023-07-31 17:17 Naohiro Aota
  2023-07-31 17:17 ` [PATCH v2 01/10] btrfs: introduce struct to consolidate extent buffer write context Naohiro Aota
                   ` (9 more replies)
  0 siblings, 10 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota

In the current implementation, block groups are activated at
reservation time to ensure that all reserved bytes can be written to
an active metadata block group. However, this approach has proven to
be less efficient, as it activates block groups more frequently than
necessary, putting pressure on the active zone resource and leading to
potential issues such as early ENOSPC or hung_task.

Another drawback of the current method is that it hampers metadata
over-commit, and necessitates additional flush operations and block
group allocations, resulting in decreased overall performance.

Actually, we don't need so many active metadata block groups because
there is only one sequential metadata write stream.

So, this series introduces write-time activation of metadata and
system block groups. This involves reserving at least one active block
group specifically for a metadata and a system block group. When a
write goes into a new block group, all the regions in the current
active block group must already have been allocated. So, we can wait
for the in-flight IOs to fill that space, and then switch to the new
block group.

Switching to write-time activation solves the above issues and leads
to better performance.
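
For illustration, the write path conceptually becomes something like
the following (a rough sketch only; the helper names here are
placeholders, not the actual functions in this series):

    /* per extent buffer, under the zoned metadata IO lock */
    if (bg->meta_write_pointer == eb->start) {
        if (!bg_is_active(bg)) {
            /*
             * The previous active block group is fully allocated
             * by now, so wait for its in-flight writes, finish its
             * zone to release the active zone resource, and pivot.
             */
            wait_for_eb_writes(prev_bg);
            zone_finish(prev_bg);
            activate(bg);
        }
        write_eb(eb);
    }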

* Performance

There is a significant improvement with a buffered-write workload (no
sync) because we re-enable metadata over-commit.

before the patch:  741.00 MB/sec
after the patch:  1430.27 MB/sec (+ 93%)

* Organization

Patches 1-5 are preparation patches that rework the meta_write_pointer check.

Patches 6 and 7 are the main part of this series, implementing the
write-time activation.

Patches 8-10 remove the now-unnecessary reservation-time activation
code: counting a fresh block group as zone_unusable, activating a block
group on allocation, and disabling metadata over-commit.

* Changes

- v2
  - Introduce a struct to consolidate extent buffer write context
    (btrfs_eb_write_context)
  - Change return type of btrfs_check_meta_write_pointer to int
  - Calculate the reservation count only when it sees DUP BG
  - Drop unnecessary BG lock

Naohiro Aota (10):
  btrfs: introduce struct to consolidate extent buffer write context
  btrfs: zoned: introduce block_group context to btrfs_eb_write_context
  btrfs: zoned: return int from btrfs_check_meta_write_pointer
  btrfs: zoned: defer advancing meta_write_pointer
  btrfs: zoned: update meta_write_pointer on zone finish
  btrfs: zoned: reserve zones for an active metadata/system block group
  btrfs: zoned: activate metadata block group on write time
  btrfs: zoned: no longer count fresh BG region as zone unusable
  btrfs: zoned: don't activate non-DATA BG on allocation
  btrfs: zoned: re-enable metadata over-commit for zoned mode

 fs/btrfs/block-group.c      |  13 ++-
 fs/btrfs/extent-tree.c      |   8 +-
 fs/btrfs/extent_io.c        |  48 +++++----
 fs/btrfs/extent_io.h        |   6 ++
 fs/btrfs/free-space-cache.c |   8 +-
 fs/btrfs/fs.h               |   9 ++
 fs/btrfs/space-info.c       |  34 +-----
 fs/btrfs/zoned.c            | 201 +++++++++++++++++++++++++++---------
 fs/btrfs/zoned.h            |  20 +---
 9 files changed, 216 insertions(+), 131 deletions(-)

-- 
2.41.0


* [PATCH v2 01/10] btrfs: introduce struct to consolidate extent buffer write context
  2023-07-31 17:17 [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group Naohiro Aota
@ 2023-07-31 17:17 ` Naohiro Aota
  2023-08-01  7:53   ` Christoph Hellwig
  2023-08-01 11:59   ` Johannes Thumshirn
  2023-07-31 17:17 ` [PATCH v2 02/10] btrfs: zoned: introduce block_group context to btrfs_eb_write_context Naohiro Aota
                   ` (8 subsequent siblings)
  9 siblings, 2 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota

Introduce btrfs_eb_write_context to consolidate writeback_control and the
extent buffer context.

This will help adding a block group context as well.

While at it, move the eb context setting before
btrfs_check_meta_write_pointer(). We can set it there because we need to
skip pages in the same eb anyway, even if that eb is rejected by
btrfs_check_meta_write_pointer().

Suggested-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/extent_io.c | 17 ++++++++++-------
 fs/btrfs/extent_io.h |  5 +++++
 2 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 177d65d51447..40633bc15c97 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1784,9 +1784,9 @@ static int submit_eb_subpage(struct page *page, struct writeback_control *wbc)
  * previous call.
  * Return <0 for fatal error.
  */
-static int submit_eb_page(struct page *page, struct writeback_control *wbc,
-			  struct extent_buffer **eb_context)
+static int submit_eb_page(struct page *page, struct btrfs_eb_write_context *ctx)
 {
+	struct writeback_control *wbc = ctx->wbc;
 	struct address_space *mapping = page->mapping;
 	struct btrfs_block_group *cache = NULL;
 	struct extent_buffer *eb;
@@ -1815,7 +1815,7 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
 		return 0;
 	}
 
-	if (eb == *eb_context) {
+	if (eb == ctx->eb) {
 		spin_unlock(&mapping->private_lock);
 		return 0;
 	}
@@ -1824,6 +1824,8 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
 	if (!ret)
 		return 0;
 
+	ctx->eb = eb;
+
 	if (!btrfs_check_meta_write_pointer(eb->fs_info, eb, &cache)) {
 		/*
 		 * If for_sync, this hole will be filled with
@@ -1837,8 +1839,6 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
 		return ret;
 	}
 
-	*eb_context = eb;
-
 	if (!lock_extent_buffer_for_io(eb, wbc)) {
 		btrfs_revert_meta_write_pointer(cache, eb);
 		if (cache)
@@ -1861,7 +1861,10 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
 int btree_write_cache_pages(struct address_space *mapping,
 				   struct writeback_control *wbc)
 {
-	struct extent_buffer *eb_context = NULL;
+	struct btrfs_eb_write_context ctx = {
+		.wbc = wbc,
+		.eb = NULL,
+	};
 	struct btrfs_fs_info *fs_info = BTRFS_I(mapping->host)->root->fs_info;
 	int ret = 0;
 	int done = 0;
@@ -1903,7 +1906,7 @@ int btree_write_cache_pages(struct address_space *mapping,
 		for (i = 0; i < nr_folios; i++) {
 			struct folio *folio = fbatch.folios[i];
 
-			ret = submit_eb_page(&folio->page, wbc, &eb_context);
+			ret = submit_eb_page(&folio->page, &ctx);
 			if (ret == 0)
 				continue;
 			if (ret < 0) {
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index adda14c1b763..e243a8eac910 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -93,6 +93,11 @@ struct extent_buffer {
 #endif
 };
 
+struct btrfs_eb_write_context {
+	struct writeback_control *wbc;
+	struct extent_buffer *eb;
+};
+
 /*
  * Get the correct offset inside the page of extent buffer.
  *
-- 
2.41.0


* [PATCH v2 02/10] btrfs: zoned: introduce block_group context to btrfs_eb_write_context
  2023-07-31 17:17 [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group Naohiro Aota
  2023-07-31 17:17 ` [PATCH v2 01/10] btrfs: introduce struct to consolidate extent buffer write context Naohiro Aota
@ 2023-07-31 17:17 ` Naohiro Aota
  2023-08-01  7:55   ` Christoph Hellwig
  2023-08-01 12:05   ` Johannes Thumshirn
  2023-07-31 17:17 ` [PATCH v2 03/10] btrfs: zoned: return int from btrfs_check_meta_write_pointer Naohiro Aota
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota

For metadata write out on the zoned mode, we call
btrfs_check_meta_write_pointer() to check if an extent buffer to be written
is aligned to the write pointer.

We look up the block group containing the extent buffer for every extent
buffer, which takes unnecessary effort as the extent buffers being
written are mostly contiguous.

Introduce "block_group" to cache the block group we are working on.

Also, while at it, rename "cache" to "block_group".

Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/extent_io.c | 16 ++++++++--------
 fs/btrfs/extent_io.h |  1 +
 fs/btrfs/zoned.c     | 35 ++++++++++++++++++++---------------
 fs/btrfs/zoned.h     |  6 ++----
 4 files changed, 31 insertions(+), 27 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 40633bc15c97..da8d9478972c 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1788,7 +1788,6 @@ static int submit_eb_page(struct page *page, struct btrfs_eb_write_context *ctx)
 {
 	struct writeback_control *wbc = ctx->wbc;
 	struct address_space *mapping = page->mapping;
-	struct btrfs_block_group *cache = NULL;
 	struct extent_buffer *eb;
 	int ret;
 
@@ -1826,7 +1825,7 @@ static int submit_eb_page(struct page *page, struct btrfs_eb_write_context *ctx)
 
 	ctx->eb = eb;
 
-	if (!btrfs_check_meta_write_pointer(eb->fs_info, eb, &cache)) {
+	if (!btrfs_check_meta_write_pointer(eb->fs_info, ctx)) {
 		/*
 		 * If for_sync, this hole will be filled with
 		 * transaction commit.
@@ -1840,18 +1839,15 @@ static int submit_eb_page(struct page *page, struct btrfs_eb_write_context *ctx)
 	}
 
 	if (!lock_extent_buffer_for_io(eb, wbc)) {
-		btrfs_revert_meta_write_pointer(cache, eb);
-		if (cache)
-			btrfs_put_block_group(cache);
+		btrfs_revert_meta_write_pointer(ctx->block_group, eb);
 		free_extent_buffer(eb);
 		return 0;
 	}
-	if (cache) {
+	if (ctx->block_group) {
 		/*
 		 * Implies write in zoned mode. Mark the last eb in a block group.
 		 */
-		btrfs_schedule_zone_finish_bg(cache, eb);
-		btrfs_put_block_group(cache);
+		btrfs_schedule_zone_finish_bg(ctx->block_group, eb);
 	}
 	write_one_eb(eb, wbc);
 	free_extent_buffer(eb);
@@ -1864,6 +1860,7 @@ int btree_write_cache_pages(struct address_space *mapping,
 	struct btrfs_eb_write_context ctx = {
 		.wbc = wbc,
 		.eb = NULL,
+		.block_group = NULL,
 	};
 	struct btrfs_fs_info *fs_info = BTRFS_I(mapping->host)->root->fs_info;
 	int ret = 0;
@@ -1967,6 +1964,9 @@ int btree_write_cache_pages(struct address_space *mapping,
 		ret = 0;
 	if (!ret && BTRFS_FS_ERROR(fs_info))
 		ret = -EROFS;
+
+	if (ctx.block_group)
+		btrfs_put_block_group(ctx.block_group);
 	btrfs_zoned_meta_io_unlock(fs_info);
 	return ret;
 }
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index e243a8eac910..d616d30ed4bd 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -96,6 +96,7 @@ struct extent_buffer {
 struct btrfs_eb_write_context {
 	struct writeback_control *wbc;
 	struct extent_buffer *eb;
+	struct btrfs_block_group *block_group;
 };
 
 /*
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 5e4285ae112c..a6cdd0c4d7b7 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1748,30 +1748,35 @@ void btrfs_finish_ordered_zoned(struct btrfs_ordered_extent *ordered)
 }
 
 bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
-				    struct extent_buffer *eb,
-				    struct btrfs_block_group **cache_ret)
+				    struct btrfs_eb_write_context *ctx)
 {
-	struct btrfs_block_group *cache;
-	bool ret = true;
+	const struct extent_buffer *eb = ctx->eb;
+	struct btrfs_block_group *block_group = ctx->block_group;
 
 	if (!btrfs_is_zoned(fs_info))
 		return true;
 
-	cache = btrfs_lookup_block_group(fs_info, eb->start);
-	if (!cache)
-		return true;
+	if (block_group) {
+		if (block_group->start > eb->start ||
+		    block_group->start + block_group->length <= eb->start) {
+			btrfs_put_block_group(block_group);
+			block_group = NULL;
+			ctx->block_group = NULL;
+		}
+	}
 
-	if (cache->meta_write_pointer != eb->start) {
-		btrfs_put_block_group(cache);
-		cache = NULL;
-		ret = false;
-	} else {
-		cache->meta_write_pointer = eb->start + eb->len;
+	if (!block_group) {
+		block_group = btrfs_lookup_block_group(fs_info, eb->start);
+		if (!block_group)
+			return true;
+		ctx->block_group = block_group;
 	}
 
-	*cache_ret = cache;
+	if (block_group->meta_write_pointer != eb->start)
+		return false;
+	block_group->meta_write_pointer = eb->start + eb->len;
 
-	return ret;
+	return true;
 }
 
 void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index 27322b926038..49d5bd87245c 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -59,8 +59,7 @@ void btrfs_redirty_list_add(struct btrfs_transaction *trans,
 bool btrfs_use_zone_append(struct btrfs_bio *bbio);
 void btrfs_record_physical_zoned(struct btrfs_bio *bbio);
 bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
-				    struct extent_buffer *eb,
-				    struct btrfs_block_group **cache_ret);
+				    struct btrfs_eb_write_context *ctx);
 void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
 				     struct extent_buffer *eb);
 int btrfs_zoned_issue_zeroout(struct btrfs_device *device, u64 physical, u64 length);
@@ -190,8 +189,7 @@ static inline void btrfs_record_physical_zoned(struct btrfs_bio *bbio)
 }
 
 static inline bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
-			       struct extent_buffer *eb,
-			       struct btrfs_block_group **cache_ret)
+						  struct btrfs_eb_write_context *ctx)
 {
 	return true;
 }
-- 
2.41.0


* [PATCH v2 03/10] btrfs: zoned: return int from btrfs_check_meta_write_pointer
  2023-07-31 17:17 [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group Naohiro Aota
  2023-07-31 17:17 ` [PATCH v2 01/10] btrfs: introduce struct to consolidate extent buffer write context Naohiro Aota
  2023-07-31 17:17 ` [PATCH v2 02/10] btrfs: zoned: introduce block_group context to btrfs_eb_write_context Naohiro Aota
@ 2023-07-31 17:17 ` Naohiro Aota
  2023-08-01  7:56   ` Christoph Hellwig
                     ` (2 more replies)
  2023-07-31 17:17 ` [PATCH v2 04/10] btrfs: zoned: defer advancing meta_write_pointer Naohiro Aota
                   ` (6 subsequent siblings)
  9 siblings, 3 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota

Now that we have writeback_control passed to
btrfs_check_meta_write_pointer(), we can move the wbc condition from
submit_eb_page() into btrfs_check_meta_write_pointer() and return an int.
---
 fs/btrfs/extent_io.c | 11 +++--------
 fs/btrfs/zoned.c     | 30 ++++++++++++++++++++++--------
 fs/btrfs/zoned.h     | 10 +++++-----
 3 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index da8d9478972c..012f2853b835 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1825,14 +1825,9 @@ static int submit_eb_page(struct page *page, struct btrfs_eb_write_context *ctx)
 
 	ctx->eb = eb;
 
-	if (!btrfs_check_meta_write_pointer(eb->fs_info, ctx)) {
-		/*
-		 * If for_sync, this hole will be filled with
-		 * transaction commit.
-		 */
-		if (wbc->sync_mode == WB_SYNC_ALL && !wbc->for_sync)
-			ret = -EAGAIN;
-		else
+	ret = btrfs_check_meta_write_pointer(eb->fs_info, ctx);
+	if (ret) {
+		if (ret == -EBUSY)
 			ret = 0;
 		free_extent_buffer(eb);
 		return ret;
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index a6cdd0c4d7b7..0aa32b19adb5 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1747,14 +1747,23 @@ void btrfs_finish_ordered_zoned(struct btrfs_ordered_extent *ordered)
 	}
 }
 
-bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
-				    struct btrfs_eb_write_context *ctx)
+/*
+ * Check @ctx->eb is aligned to the write pointer
+ *
+ * Return:
+ *   0: @ctx->eb is at the write pointer. You can write it.
+ *   -EAGAIN: There is a hole. The caller should handle the case.
+ *   -EBUSY: There is a hole, but the caller can just bail out.
+ */
+int btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
+				   struct btrfs_eb_write_context *ctx)
 {
+	const struct writeback_control *wbc = ctx->wbc;
 	const struct extent_buffer *eb = ctx->eb;
 	struct btrfs_block_group *block_group = ctx->block_group;
 
 	if (!btrfs_is_zoned(fs_info))
-		return true;
+		return 0;
 
 	if (block_group) {
 		if (block_group->start > eb->start ||
@@ -1768,15 +1777,20 @@ bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 	if (!block_group) {
 		block_group = btrfs_lookup_block_group(fs_info, eb->start);
 		if (!block_group)
-			return true;
+			return 0;
 		ctx->block_group = block_group;
 	}
 
-	if (block_group->meta_write_pointer != eb->start)
-		return false;
-	block_group->meta_write_pointer = eb->start + eb->len;
+	if (block_group->meta_write_pointer == eb->start) {
+		block_group->meta_write_pointer = eb->start + eb->len;
 
-	return true;
+		return 0;
+	}
+
+	/* If for_sync, this hole will be filled with transaction commit. */
+	if (wbc->sync_mode == WB_SYNC_ALL && !wbc->for_sync)
+		return -EAGAIN;
+	return -EBUSY;
 }
 
 void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index 49d5bd87245c..c0859d8be152 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -58,8 +58,8 @@ void btrfs_redirty_list_add(struct btrfs_transaction *trans,
 			    struct extent_buffer *eb);
 bool btrfs_use_zone_append(struct btrfs_bio *bbio);
 void btrfs_record_physical_zoned(struct btrfs_bio *bbio);
-bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
-				    struct btrfs_eb_write_context *ctx);
+int btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
+				   struct btrfs_eb_write_context *ctx);
 void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
 				     struct extent_buffer *eb);
 int btrfs_zoned_issue_zeroout(struct btrfs_device *device, u64 physical, u64 length);
@@ -188,10 +188,10 @@ static inline void btrfs_record_physical_zoned(struct btrfs_bio *bbio)
 {
 }
 
-static inline bool btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
-						  struct btrfs_eb_write_context *ctx)
+static inline int btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
+						 struct btrfs_eb_write_context *ctx)
 {
-	return true;
+	return 0;
 }
 
 static inline void btrfs_revert_meta_write_pointer(
-- 
2.41.0


* [PATCH v2 04/10] btrfs: zoned: defer advancing meta_write_pointer
  2023-07-31 17:17 [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group Naohiro Aota
                   ` (2 preceding siblings ...)
  2023-07-31 17:17 ` [PATCH v2 03/10] btrfs: zoned: return int from btrfs_check_meta_write_pointer Naohiro Aota
@ 2023-07-31 17:17 ` Naohiro Aota
  2023-08-01  7:58   ` Christoph Hellwig
  2023-08-01 12:13   ` Johannes Thumshirn
  2023-07-31 17:17 ` [PATCH v2 05/10] btrfs: zoned: update meta_write_pointer on zone finish Naohiro Aota
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota

We currently advance the meta_write_pointer in
btrfs_check_meta_write_pointer(). That makes it necessary to revert it
when locking the buffer fails. Instead, we can advance it just before
sending the buffer.

Also, this is necessary for the following commit, which needs to release
the zoned_meta_io_lock to allow IOs to come in and then waits for them to
fill the currently active block group. If we advanced the
meta_write_pointer before locking the extent buffer, the following extent
buffer could pass the meta_write_pointer check, resulting in an unaligned
write failure.
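
One possible interleaving of that failure mode (illustrative only):

    - check eb1, advance meta_write_pointer to eb1->start + eb1->len
    - release zoned_meta_io_lock to wait for in-flight IOs
    - another writeback context checks eb2 (== eb1->start + eb1->len),
      passes the meta_write_pointer check, and submits eb2 before eb1
      is written
    - the device receives an out-of-order, i.e. unaligned, write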

Advancing the pointer is still thread-safe as the extent buffer is locked.

Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/extent_io.c |  8 ++++----
 fs/btrfs/zoned.c     | 15 +--------------
 fs/btrfs/zoned.h     |  8 --------
 3 files changed, 5 insertions(+), 26 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 012f2853b835..5388c2c3c6f4 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1834,15 +1834,15 @@ static int submit_eb_page(struct page *page, struct btrfs_eb_write_context *ctx)
 	}
 
 	if (!lock_extent_buffer_for_io(eb, wbc)) {
-		btrfs_revert_meta_write_pointer(ctx->block_group, eb);
 		free_extent_buffer(eb);
 		return 0;
 	}
 	if (ctx->block_group) {
-		/*
-		 * Implies write in zoned mode. Mark the last eb in a block group.
-		 */
+		/* Implies write in zoned mode. */
+
+		/* Mark the last eb in the block group. */
 		btrfs_schedule_zone_finish_bg(ctx->block_group, eb);
+		ctx->block_group->meta_write_pointer += eb->len;
 	}
 	write_one_eb(eb, wbc);
 	free_extent_buffer(eb);
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 0aa32b19adb5..fa595eca39ca 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1781,11 +1781,8 @@ int btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 		ctx->block_group = block_group;
 	}
 
-	if (block_group->meta_write_pointer == eb->start) {
-		block_group->meta_write_pointer = eb->start + eb->len;
-
+	if (block_group->meta_write_pointer == eb->start)
 		return 0;
-	}
 
 	/* If for_sync, this hole will be filled with transaction commit. */
 	if (wbc->sync_mode == WB_SYNC_ALL && !wbc->for_sync)
@@ -1793,16 +1790,6 @@ int btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 	return -EBUSY;
 }
 
-void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
-				     struct extent_buffer *eb)
-{
-	if (!btrfs_is_zoned(eb->fs_info) || !cache)
-		return;
-
-	ASSERT(cache->meta_write_pointer == eb->start + eb->len);
-	cache->meta_write_pointer = eb->start;
-}
-
 int btrfs_zoned_issue_zeroout(struct btrfs_device *device, u64 physical, u64 length)
 {
 	if (!btrfs_dev_is_sequential(device, physical))
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index c0859d8be152..74ec37a25808 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -60,8 +60,6 @@ bool btrfs_use_zone_append(struct btrfs_bio *bbio);
 void btrfs_record_physical_zoned(struct btrfs_bio *bbio);
 int btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 				   struct btrfs_eb_write_context *ctx);
-void btrfs_revert_meta_write_pointer(struct btrfs_block_group *cache,
-				     struct extent_buffer *eb);
 int btrfs_zoned_issue_zeroout(struct btrfs_device *device, u64 physical, u64 length);
 int btrfs_sync_zone_write_pointer(struct btrfs_device *tgt_dev, u64 logical,
 				  u64 physical_start, u64 physical_pos);
@@ -194,12 +192,6 @@ static inline int btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 	return 0;
 }
 
-static inline void btrfs_revert_meta_write_pointer(
-						struct btrfs_block_group *cache,
-						struct extent_buffer *eb)
-{
-}
-
 static inline int btrfs_zoned_issue_zeroout(struct btrfs_device *device,
 					    u64 physical, u64 length)
 {
-- 
2.41.0


* [PATCH v2 05/10] btrfs: zoned: update meta_write_pointer on zone finish
  2023-07-31 17:17 [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group Naohiro Aota
                   ` (3 preceding siblings ...)
  2023-07-31 17:17 ` [PATCH v2 04/10] btrfs: zoned: defer advancing meta_write_pointer Naohiro Aota
@ 2023-07-31 17:17 ` Naohiro Aota
  2023-08-01 12:15   ` Johannes Thumshirn
  2023-07-31 17:17 ` [PATCH v2 06/10] btrfs: zoned: reserve zones for an active metadata/system block group Naohiro Aota
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota, Christoph Hellwig

On finishing a zone, the meta_write_pointer should be set to the end of
the zone to reflect the actual write pointer position.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/zoned.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index fa595eca39ca..3902c16b9188 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -2056,6 +2056,9 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
 
 	clear_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags);
 	block_group->alloc_offset = block_group->zone_capacity;
+	if (block_group->flags & (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM))
+		block_group->meta_write_pointer = block_group->start +
+			block_group->zone_capacity;
 	block_group->free_space_ctl->free_space = 0;
 	btrfs_clear_treelog_bg(block_group);
 	btrfs_clear_data_reloc_bg(block_group);
-- 
2.41.0


* [PATCH v2 06/10] btrfs: zoned: reserve zones for an active metadata/system block group
  2023-07-31 17:17 [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group Naohiro Aota
                   ` (4 preceding siblings ...)
  2023-07-31 17:17 ` [PATCH v2 05/10] btrfs: zoned: update meta_write_pointer on zone finish Naohiro Aota
@ 2023-07-31 17:17 ` Naohiro Aota
  2023-08-01 12:23   ` Johannes Thumshirn
  2023-08-02  4:50   ` Naohiro Aota
  2023-07-31 17:17 ` [PATCH v2 07/10] btrfs: zoned: activate metadata block group on write time Naohiro Aota
                   ` (3 subsequent siblings)
  9 siblings, 2 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota

Ensure a metadata and a system block group can be activated at write
time, by leaving a certain number of active zones free when trying to
activate a data block group.

When both metadata and system profiles are set to SINGLE, we need to
reserve two zones. When both are DUP, we need to reserve four zones.

In the case where only one of them is DUP, we would only need three
zones. However, handling that case requires at least two bits to track
whether we have seen the DUP profile for metadata and for system, which
is cumbersome. So, just reserve four zones in that case for now.
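
To spell out the reservation count per device (following the logic
above; the three-zone cases are rounded up):

    metadata SINGLE + system SINGLE: 1 + 1 = 2 reserved zones
    metadata DUP    + system DUP:    2 + 2 = 4 reserved zones
    metadata DUP    + system SINGLE: 2 + 1 = 3 -> reserve 4
    metadata SINGLE + system DUP:    1 + 2 = 3 -> reserve 4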

Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/fs.h    |  6 ++++++
 fs/btrfs/zoned.c | 39 +++++++++++++++++++++++++++++++++++++--
 2 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
index ef07c6c252d8..2ce391959b6a 100644
--- a/fs/btrfs/fs.h
+++ b/fs/btrfs/fs.h
@@ -775,6 +775,12 @@ struct btrfs_fs_info {
 	spinlock_t zone_active_bgs_lock;
 	struct list_head zone_active_bgs;
 
+	/*
+	 * Reserved active zones per-device for one metadata and one system
+	 * block group.
+	 */
+	unsigned int reserved_active_zones;
+
 	/* Updates are not protected by any lock */
 	struct btrfs_commit_stats commit_stats;
 
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 3902c16b9188..9dbcd747ee74 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -525,6 +525,12 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device, bool populate_cache)
 		atomic_set(&zone_info->active_zones_left,
 			   max_active_zones - nactive);
 		set_bit(BTRFS_FS_ACTIVE_ZONE_TRACKING, &fs_info->flags);
+		/*
+		 * First, reserve zones for SINGLE metadata and SINGLE system
+		 * profile. The reservation will be increased when seeing DUP
+		 * profile.
+		 */
+		fs_info->reserved_active_zones = 2;
 	}
 
 	/* Validate superblock log */
@@ -1515,6 +1521,22 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
 		}
 		cache->alloc_offset = alloc_offsets[0];
 		cache->zone_capacity = min(caps[0], caps[1]);
+
+		/*
+		 * DUP profile needs two zones on the same device. Reserve 2
+		 * zones * 2 types (metadata and system) = 4 zones.
+		 *
+		 * Technically, we can have SINGLE metadata and DUP system
+		 * config. And, in that case, we only need 3 zones, wasting one
+		 * active zone. But, to do the precise reservation, we need one
+		 * more variable just to track whether we have seen a DUP block group
+		 * or not, which is cumbersome.
+		 *
+		 * For now, let's be lazy and just reserve 4 zones.
+		 */
+		if (test_bit(BTRFS_FS_ACTIVE_ZONE_TRACKING, &fs_info->flags) &&
+		    !(cache->flags & BTRFS_BLOCK_GROUP_DATA))
+			fs_info->reserved_active_zones = 4;
 		break;
 	case BTRFS_BLOCK_GROUP_RAID1:
 	case BTRFS_BLOCK_GROUP_RAID0:
@@ -1888,6 +1910,8 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 	struct btrfs_space_info *space_info = block_group->space_info;
 	struct map_lookup *map;
 	struct btrfs_device *device;
+	const unsigned int reserved = (block_group->flags & BTRFS_BLOCK_GROUP_DATA) ?
+		fs_info->reserved_active_zones : 0;
 	u64 physical;
 	bool ret;
 	int i;
@@ -1917,6 +1941,15 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 		if (device->zone_info->max_active_zones == 0)
 			continue;
 
+		/*
+		 * For the data block group, leave active zones for one
+		 * metadata block group and one system block group.
+		 */
+		if (atomic_read(&device->zone_info->active_zones_left) <= reserved) {
+			ret = false;
+			goto out_unlock;
+		}
+
 		if (!btrfs_dev_set_active_zone(device, physical)) {
 			/* Cannot activate the zone */
 			ret = false;
@@ -2111,6 +2144,8 @@ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
 {
 	struct btrfs_fs_info *fs_info = fs_devices->fs_info;
 	struct btrfs_device *device;
+	const unsigned int reserved = (flags & BTRFS_BLOCK_GROUP_DATA) ?
+		fs_info->reserved_active_zones : 0;
 	bool ret = false;
 
 	if (!btrfs_is_zoned(fs_info))
@@ -2131,10 +2166,10 @@ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
 
 		switch (flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
 		case 0: /* single */
-			ret = (atomic_read(&zinfo->active_zones_left) >= 1);
+			ret = (atomic_read(&zinfo->active_zones_left) >= (1 + reserved));
 			break;
 		case BTRFS_BLOCK_GROUP_DUP:
-			ret = (atomic_read(&zinfo->active_zones_left) >= 2);
+			ret = (atomic_read(&zinfo->active_zones_left) >= (2 + reserved));
 			break;
 		}
 		if (ret)
-- 
2.41.0


* [PATCH v2 07/10] btrfs: zoned: activate metadata block group on write time
  2023-07-31 17:17 [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group Naohiro Aota
                   ` (5 preceding siblings ...)
  2023-07-31 17:17 ` [PATCH v2 06/10] btrfs: zoned: reserve zones for an active metadata/system block group Naohiro Aota
@ 2023-07-31 17:17 ` Naohiro Aota
  2023-07-31 17:17 ` [PATCH v2 08/10] btrfs: zoned: no longer count fresh BG region as zone unusable Naohiro Aota
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota

In the current implementation, block groups are activated at reservation
time to ensure that all reserved bytes can be written to an active metadata
block group. However, this approach has proven to be less efficient, as it
activates block groups more frequently than necessary, putting pressure on
the active zone resource and leading to potential issues such as early
ENOSPC or hung_task.

Another drawback of the current method is that it hampers metadata
over-commit, and necessitates additional flush operations and block group
allocations, resulting in decreased overall performance.

To address these issues, this commit introduces write-time activation of
metadata and system block groups. This involves reserving at least one
active block group specifically for a metadata and a system block group.

Since metadata write-out always proceeds sequentially, when we need to
write to a non-active block group, we can wait for the ongoing IOs to
complete, activate a new block group, and then proceed with writing to
the new block group.
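
Condensed, the write-time pivot implemented below in
check_bg_is_active() does roughly the following (paraphrased, not
verbatim from the patch):

    if (bg is already active)
        proceed with the write;
    else if (bg == the treelog bg)
        activate it, zone-finishing another block group if needed;
    else {
        drop zoned_meta_io_lock;
        wait_eb_writebacks(old active bg);
        do_zone_finish(old active bg);   /* releases an active zone */
        retake zoned_meta_io_lock;
        activate bg and record it as active_{meta,system}_bg;
    }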

Fixes: b09315139136 ("btrfs: zoned: activate metadata block group on flush_space")
CC: stable@vger.kernel.org # 6.1+
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/block-group.c | 11 ++++++
 fs/btrfs/fs.h          |  3 ++
 fs/btrfs/zoned.c       | 83 +++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 95 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index a127865f49f9..b0e432c30e1d 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -4287,6 +4287,17 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
 	struct btrfs_caching_control *caching_ctl;
 	struct rb_node *n;
 
+	if (btrfs_is_zoned(info)) {
+		if (info->active_meta_bg) {
+			btrfs_put_block_group(info->active_meta_bg);
+			info->active_meta_bg = NULL;
+		}
+		if (info->active_system_bg) {
+			btrfs_put_block_group(info->active_system_bg);
+			info->active_system_bg = NULL;
+		}
+	}
+
 	write_lock(&info->block_group_cache_lock);
 	while (!list_empty(&info->caching_block_groups)) {
 		caching_ctl = list_entry(info->caching_block_groups.next,
diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
index 2ce391959b6a..bcb43ba55ef6 100644
--- a/fs/btrfs/fs.h
+++ b/fs/btrfs/fs.h
@@ -770,6 +770,9 @@ struct btrfs_fs_info {
 	u64 data_reloc_bg;
 	struct mutex zoned_data_reloc_io_lock;
 
+	struct btrfs_block_group *active_meta_bg;
+	struct btrfs_block_group *active_system_bg;
+
 	u64 nr_global_roots;
 
 	spinlock_t zone_active_bgs_lock;
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 9dbcd747ee74..91eca8b48715 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -65,6 +65,9 @@
 
 #define SUPER_INFO_SECTORS	((u64)BTRFS_SUPER_INFO_SIZE >> SECTOR_SHIFT)
 
+static void wait_eb_writebacks(struct btrfs_block_group *block_group);
+static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_written);
+
 static inline bool sb_zone_is_full(const struct blk_zone *zone)
 {
 	return (zone->cond == BLK_ZONE_COND_FULL) ||
@@ -1769,6 +1772,64 @@ void btrfs_finish_ordered_zoned(struct btrfs_ordered_extent *ordered)
 	}
 }
 
+static bool check_bg_is_active(struct btrfs_eb_write_context *ctx,
+			       struct btrfs_block_group **active_bg)
+{
+	const struct writeback_control *wbc = ctx->wbc;
+	struct btrfs_block_group *block_group = ctx->block_group;
+	struct btrfs_fs_info *fs_info = block_group->fs_info;
+
+	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags))
+		return true;
+
+	if (fs_info->treelog_bg == block_group->start) {
+		if (!btrfs_zone_activate(block_group)) {
+			int ret_fin = btrfs_zone_finish_one_bg(fs_info);
+
+			if (ret_fin != 1 || !btrfs_zone_activate(block_group))
+				return false;
+		}
+	} else if (*active_bg != block_group) {
+		struct btrfs_block_group *tgt = *active_bg;
+
+		/*
+		 * zoned_meta_io_lock protects fs_info->active_{meta,system}_bg.
+		 */
+		lockdep_assert_held(&fs_info->zoned_meta_io_lock);
+
+		if (tgt) {
+			/*
+			 * If there is an unsent IO left in the allocated area,
+			 * we cannot wait for them as it may cause a deadlock.
+			 */
+			if (tgt->meta_write_pointer < tgt->start + tgt->alloc_offset) {
+				if (wbc->sync_mode == WB_SYNC_NONE ||
+				    (wbc->sync_mode == WB_SYNC_ALL && !wbc->for_sync))
+					return false;
+			}
+
+			/* Pivot active metadata/system block group. */
+			btrfs_zoned_meta_io_unlock(fs_info);
+			wait_eb_writebacks(tgt);
+			do_zone_finish(tgt, true);
+			btrfs_zoned_meta_io_lock(fs_info);
+			if (*active_bg == tgt) {
+				btrfs_put_block_group(tgt);
+				*active_bg = NULL;
+			}
+		}
+		if (!btrfs_zone_activate(block_group))
+			return false;
+		if (*active_bg != block_group) {
+			ASSERT(*active_bg == NULL);
+			*active_bg = block_group;
+			btrfs_get_block_group(block_group);
+		}
+	}
+
+	return true;
+}
+
 /*
  * Check @ctx->eb is aligned to the write pointer
  *
@@ -1803,8 +1864,26 @@ int btrfs_check_meta_write_pointer(struct btrfs_fs_info *fs_info,
 		ctx->block_group = block_group;
 	}
 
-	if (block_group->meta_write_pointer == eb->start)
-		return 0;
+	if (block_group->meta_write_pointer == eb->start) {
+		struct btrfs_block_group **tgt;
+
+		if (!test_bit(BTRFS_FS_ACTIVE_ZONE_TRACKING, &fs_info->flags))
+			return 0;
+
+		if (block_group->flags & BTRFS_BLOCK_GROUP_SYSTEM)
+			tgt = &fs_info->active_system_bg;
+		else
+			tgt = &fs_info->active_meta_bg;
+		if (check_bg_is_active(ctx, tgt))
+			return 0;
+	}
+
+	/*
+	 * Since we may release fs_info->zoned_meta_io_lock, someone can already
+	 * start writing this eb. In that case, we can just bail out.
+	 */
+	if (block_group->meta_write_pointer > eb->start)
+		return -EBUSY;
 
 	/* If for_sync, this hole will be filled with transaction commit. */
 	if (wbc->sync_mode == WB_SYNC_ALL && !wbc->for_sync)
-- 
2.41.0


* [PATCH v2 08/10] btrfs: zoned: no longer count fresh BG region as zone unusable
  2023-07-31 17:17 [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group Naohiro Aota
                   ` (6 preceding siblings ...)
  2023-07-31 17:17 ` [PATCH v2 07/10] btrfs: zoned: activate metadata block group on write time Naohiro Aota
@ 2023-07-31 17:17 ` Naohiro Aota
  2023-07-31 17:17 ` [PATCH v2 09/10] btrfs: zoned: don't activate non-DATA BG on allocation Naohiro Aota
  2023-07-31 17:17 ` [PATCH v2 10/10] btrfs: zoned: re-enable metadata over-commit for zoned mode Naohiro Aota
  9 siblings, 0 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota

Now that we have switched to write-time activation, we no longer need to
(and must not) count the fresh region as zone unusable. This commit
effectively reverts commit fc22cf8eba79 ("btrfs: zoned: count fresh BG
region as zone unusable").

Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/free-space-cache.c |  8 +-------
 fs/btrfs/zoned.c            | 26 +++-----------------------
 2 files changed, 4 insertions(+), 30 deletions(-)

diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index cd5bfda2c259..27fad70451aa 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -2704,13 +2704,8 @@ static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group,
 		bg_reclaim_threshold = READ_ONCE(sinfo->bg_reclaim_threshold);
 
 	spin_lock(&ctl->tree_lock);
-	/* Count initial region as zone_unusable until it gets activated. */
 	if (!used)
 		to_free = size;
-	else if (initial &&
-		 test_bit(BTRFS_FS_ACTIVE_ZONE_TRACKING, &block_group->fs_info->flags) &&
-		 (block_group->flags & (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM)))
-		to_free = 0;
 	else if (initial)
 		to_free = block_group->zone_capacity;
 	else if (offset >= block_group->alloc_offset)
@@ -2738,8 +2733,7 @@ static int __btrfs_add_free_space_zoned(struct btrfs_block_group *block_group,
 	reclaimable_unusable = block_group->zone_unusable -
 			       (block_group->length - block_group->zone_capacity);
 	/* All the region is now unusable. Mark it as unused and reclaim */
-	if (block_group->zone_unusable == block_group->length &&
-	    block_group->alloc_offset) {
+	if (block_group->zone_unusable == block_group->length) {
 		btrfs_mark_bg_unused(block_group);
 	} else if (bg_reclaim_threshold &&
 		   reclaimable_unusable >=
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 91eca8b48715..8c2b88be1480 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1608,19 +1608,9 @@ void btrfs_calc_zone_unusable(struct btrfs_block_group *cache)
 		return;
 
 	WARN_ON(cache->bytes_super != 0);
-
-	/* Check for block groups never get activated */
-	if (test_bit(BTRFS_FS_ACTIVE_ZONE_TRACKING, &cache->fs_info->flags) &&
-	    cache->flags & (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM) &&
-	    !test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &cache->runtime_flags) &&
-	    cache->alloc_offset == 0) {
-		unusable = cache->length;
-		free = 0;
-	} else {
-		unusable = (cache->alloc_offset - cache->used) +
-			   (cache->length - cache->zone_capacity);
-		free = cache->zone_capacity - cache->alloc_offset;
-	}
+	unusable = (cache->alloc_offset - cache->used) +
+		   (cache->length - cache->zone_capacity);
+	free = cache->zone_capacity - cache->alloc_offset;
 
 	/* We only need ->free_space in ALLOC_SEQ block groups */
 	cache->cached = BTRFS_CACHE_FINISHED;
@@ -1986,7 +1976,6 @@ int btrfs_sync_zone_write_pointer(struct btrfs_device *tgt_dev, u64 logical,
 bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 {
 	struct btrfs_fs_info *fs_info = block_group->fs_info;
-	struct btrfs_space_info *space_info = block_group->space_info;
 	struct map_lookup *map;
 	struct btrfs_device *device;
 	const unsigned int reserved = (block_group->flags & BTRFS_BLOCK_GROUP_DATA) ?
@@ -2000,7 +1989,6 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 
 	map = block_group->physical_map;
 
-	spin_lock(&space_info->lock);
 	spin_lock(&block_group->lock);
 	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags)) {
 		ret = true;
@@ -2038,14 +2026,7 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 
 	/* Successfully activated all the zones */
 	set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags);
-	WARN_ON(block_group->alloc_offset != 0);
-	if (block_group->zone_unusable == block_group->length) {
-		block_group->zone_unusable = block_group->length - block_group->zone_capacity;
-		space_info->bytes_zone_unusable -= block_group->zone_capacity;
-	}
 	spin_unlock(&block_group->lock);
-	btrfs_try_granting_tickets(fs_info, space_info);
-	spin_unlock(&space_info->lock);
 
 	/* For the active block group list */
 	btrfs_get_block_group(block_group);
@@ -2058,7 +2039,6 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
 
 out_unlock:
 	spin_unlock(&block_group->lock);
-	spin_unlock(&space_info->lock);
 	return ret;
 }
 
-- 
2.41.0


* [PATCH v2 09/10] btrfs: zoned: don't activate non-DATA BG on allocation
  2023-07-31 17:17 [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group Naohiro Aota
                   ` (7 preceding siblings ...)
  2023-07-31 17:17 ` [PATCH v2 08/10] btrfs: zoned: no longer count fresh BG region as zone unusable Naohiro Aota
@ 2023-07-31 17:17 ` Naohiro Aota
  2023-08-01 12:34   ` Johannes Thumshirn
  2023-07-31 17:17 ` [PATCH v2 10/10] btrfs: zoned: re-enable metadata over-commit for zoned mode Naohiro Aota
  9 siblings, 1 reply; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota

Now that a non-DATA block group is activated at write time, don't
activate it at allocation time.

Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/block-group.c |  2 +-
 fs/btrfs/extent-tree.c |  8 +++++++-
 fs/btrfs/space-info.c  | 28 ----------------------------
 3 files changed, 8 insertions(+), 30 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index b0e432c30e1d..0cb1dee965a0 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -4089,7 +4089,7 @@ int btrfs_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags,
 
 	if (IS_ERR(ret_bg)) {
 		ret = PTR_ERR(ret_bg);
-	} else if (from_extent_allocation) {
+	} else if (from_extent_allocation && (flags & BTRFS_BLOCK_GROUP_DATA)) {
 		/*
 		 * New block group is likely to be used soon. Try to activate
 		 * it now. Failure is OK for now.
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 12bd8dc37385..92eccb0cd487 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3690,7 +3690,9 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 	}
 	spin_unlock(&block_group->lock);
 
-	if (!ret && !btrfs_zone_activate(block_group)) {
+	/* Metadata block group is activated on write time. */
+	if (!ret && (block_group->flags & BTRFS_BLOCK_GROUP_DATA) &&
+	    !btrfs_zone_activate(block_group)) {
 		ret = 1;
 		/*
 		 * May need to clear fs_info->{treelog,data_reloc}_bg.
@@ -3870,6 +3872,10 @@ static void found_extent(struct find_free_extent_ctl *ffe_ctl,
 static int can_allocate_chunk_zoned(struct btrfs_fs_info *fs_info,
 				    struct find_free_extent_ctl *ffe_ctl)
 {
+	/* Block group's activeness is not a requirement for METADATA block groups. */
+	if (!(ffe_ctl->flags & BTRFS_BLOCK_GROUP_DATA))
+		return 0;
+
 	/* If we can activate new zone, just allocate a chunk and use it */
 	if (btrfs_can_activate_zone(fs_info->fs_devices, ffe_ctl->flags))
 		return 0;
diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
index 17c86db7b1b1..356638f54fef 100644
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -761,18 +761,6 @@ static void flush_space(struct btrfs_fs_info *fs_info,
 		break;
 	case ALLOC_CHUNK:
 	case ALLOC_CHUNK_FORCE:
-		/*
-		 * For metadata space on zoned filesystem, reaching here means we
-		 * don't have enough space left in active_total_bytes. Try to
-		 * activate a block group first, because we may have inactive
-		 * block group already allocated.
-		 */
-		ret = btrfs_zoned_activate_one_bg(fs_info, space_info, false);
-		if (ret < 0)
-			break;
-		else if (ret == 1)
-			break;
-
 		trans = btrfs_join_transaction(root);
 		if (IS_ERR(trans)) {
 			ret = PTR_ERR(trans);
@@ -784,22 +772,6 @@ static void flush_space(struct btrfs_fs_info *fs_info,
 					CHUNK_ALLOC_FORCE);
 		btrfs_end_transaction(trans);
 
-		/*
-		 * For metadata space on zoned filesystem, allocating a new chunk
-		 * is not enough. We still need to activate the block * group.
-		 * Active the newly allocated block group by (maybe) finishing
-		 * a block group.
-		 */
-		if (ret == 1) {
-			ret = btrfs_zoned_activate_one_bg(fs_info, space_info, true);
-			/*
-			 * Revert to the original ret regardless we could finish
-			 * one block group or not.
-			 */
-			if (ret >= 0)
-				ret = 1;
-		}
-
 		if (ret > 0 || ret == -ENOSPC)
 			ret = 0;
 		break;
-- 
2.41.0


* [PATCH v2 10/10] btrfs: zoned: re-enable metadata over-commit for zoned mode
  2023-07-31 17:17 [PATCH v2 00/10] btrfs: zoned: write-time activation of metadata block group Naohiro Aota
                   ` (8 preceding siblings ...)
  2023-07-31 17:17 ` [PATCH v2 09/10] btrfs: zoned: don't activate non-DATA BG on allocation Naohiro Aota
@ 2023-07-31 17:17 ` Naohiro Aota
  2023-08-01 12:35   ` Johannes Thumshirn
  9 siblings, 1 reply; 26+ messages in thread
From: Naohiro Aota @ 2023-07-31 17:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba, Naohiro Aota

Now that we have moved the activation from reservation time to write
time, we no longer need to ensure all the reserved bytes can be written
to active block groups, so we can re-enable metadata over-commit.

Without metadata over-commit, the filesystem suffers from lower
performance because it needs to flush the delalloc items more often and
allocate more block groups. Re-enabling metadata over-commit solves the
issue.
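
As a made-up example of what the re-enabled check in
btrfs_can_overcommit() permits: with total_bytes = 1 GiB, used = 900 MiB
and calc_available_free_space() returning 2 GiB, a 500 MiB metadata
reservation now succeeds (900 MiB + 500 MiB < 1 GiB + 2 GiB), whereas
with avail forced to 0 it would first have had to flush or allocate a
new block group.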

Fixes: 79417d040f4f ("btrfs: zoned: disable metadata overcommit for zoned")
CC: stable@vger.kernel.org # 6.1+
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/space-info.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
index 356638f54fef..d7e8cd4f140c 100644
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -389,11 +389,7 @@ int btrfs_can_overcommit(struct btrfs_fs_info *fs_info,
 		return 0;
 
 	used = btrfs_space_info_used(space_info, true);
-	if (test_bit(BTRFS_FS_ACTIVE_ZONE_TRACKING, &fs_info->flags) &&
-	    (space_info->flags & BTRFS_BLOCK_GROUP_METADATA))
-		avail = 0;
-	else
-		avail = calc_available_free_space(fs_info, space_info, flush);
+	avail = calc_available_free_space(fs_info, space_info, flush);
 
 	if (used + bytes < space_info->total_bytes + avail)
 		return 1;
-- 
2.41.0


* Re: [PATCH v2 01/10] btrfs: introduce struct to consolidate extent buffer write context
  2023-07-31 17:17 ` [PATCH v2 01/10] btrfs: introduce struct to consolidate extent buffer write context Naohiro Aota
@ 2023-08-01  7:53   ` Christoph Hellwig
  2023-08-01 11:59   ` Johannes Thumshirn
  1 sibling, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-08-01  7:53 UTC (permalink / raw)
  To: Naohiro Aota; +Cc: linux-btrfs, hch, josef, dsterba

> +	struct btrfs_eb_write_context ctx = {
> +		.wbc = wbc,
> +		.eb = NULL,

You can drop the eb initialization here, as all fields that are not
explicitly named are implicitly zeroed.
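
For reference, a minimal C illustration of the rule being invoked (not
from the patch itself):

    struct btrfs_eb_write_context ctx = { .wbc = wbc };
    /*
     * ctx.eb is NULL here: with designated initializers, members
     * that are not explicitly named are implicitly zero-initialized
     * (C99 6.7.8), so an explicit .eb = NULL is redundant.
     */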

With that:

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH v2 02/10] btrfs: zoned: introduce block_group context to btrfs_eb_write_context
  2023-07-31 17:17 ` [PATCH v2 02/10] btrfs: zoned: introduce block_group context to btrfs_eb_write_context Naohiro Aota
@ 2023-08-01  7:55   ` Christoph Hellwig
  2023-08-01 12:05   ` Johannes Thumshirn
  1 sibling, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-08-01  7:55 UTC (permalink / raw)
  To: Naohiro Aota; +Cc: linux-btrfs, hch, josef, dsterba

>  	struct btrfs_eb_write_context ctx = {
>  		.wbc = wbc,
>  		.eb = NULL,
> +		.block_group = NULL,

Same comment as for the last patch.

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH v2 03/10] btrfs: zoned: return int from btrfs_check_meta_write_pointer
  2023-07-31 17:17 ` [PATCH v2 03/10] btrfs: zoned: return int from btrfs_check_meta_write_pointer Naohiro Aota
@ 2023-08-01  7:56   ` Christoph Hellwig
  2023-08-01 12:07   ` Johannes Thumshirn
  2023-08-02  0:20   ` Naohiro Aota
  2 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-08-01  7:56 UTC (permalink / raw)
  To: Naohiro Aota; +Cc: linux-btrfs, hch, josef, dsterba

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH v2 04/10] btrfs: zoned: defer advancing meta_write_pointer
  2023-07-31 17:17 ` [PATCH v2 04/10] btrfs: zoned: defer advancing meta_write_pointer Naohiro Aota
@ 2023-08-01  7:58   ` Christoph Hellwig
  2023-08-02  1:35     ` Naohiro Aota
  2023-08-01 12:13   ` Johannes Thumshirn
  1 sibling, 1 reply; 26+ messages in thread
From: Christoph Hellwig @ 2023-08-01  7:58 UTC (permalink / raw)
  To: Naohiro Aota; +Cc: linux-btrfs, hch, josef, dsterba

On Tue, Aug 01, 2023 at 02:17:13AM +0900, Naohiro Aota wrote:
>  	if (!lock_extent_buffer_for_io(eb, wbc)) {
> -		btrfs_revert_meta_write_pointer(ctx->block_group, eb);
>  		free_extent_buffer(eb);
>  		return 0;
>  	}
>  	if (ctx->block_group) {
> -		/*
> -		 * Implies write in zoned mode. Mark the last eb in a block group.
> -		 */
> +		/* Implies write in zoned mode. */

.. maybe ->block_group should be named ->zoned_bg to make this
implication very clear to everyone touching the code?

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH v2 01/10] btrfs: introduce struct to consolidate extent buffer write context
  2023-07-31 17:17 ` [PATCH v2 01/10] btrfs: introduce struct to consolidate extent buffer write context Naohiro Aota
  2023-08-01  7:53   ` Christoph Hellwig
@ 2023-08-01 11:59   ` Johannes Thumshirn
  1 sibling, 0 replies; 26+ messages in thread
From: Johannes Thumshirn @ 2023-08-01 11:59 UTC (permalink / raw)
  To: Naohiro Aota, linux-btrfs; +Cc: hch, josef, dsterba

With Christoph's comment fixed:

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v2 02/10] btrfs: zoned: introduce block_group context to btrfs_eb_write_context
  2023-07-31 17:17 ` [PATCH v2 02/10] btrfs: zoned: introduce block_group context to btrfs_eb_write_context Naohiro Aota
  2023-08-01  7:55   ` Christoph Hellwig
@ 2023-08-01 12:05   ` Johannes Thumshirn
  1 sibling, 0 replies; 26+ messages in thread
From: Johannes Thumshirn @ 2023-08-01 12:05 UTC (permalink / raw)
  To: Naohiro Aota, linux-btrfs; +Cc: hch, josef, dsterba

Looks good,

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v2 03/10] btrfs: zoned: return int from btrfs_check_meta_write_pointer
  2023-07-31 17:17 ` [PATCH v2 03/10] btrfs: zoned: return int from btrfs_check_meta_write_pointer Naohiro Aota
  2023-08-01  7:56   ` Christoph Hellwig
@ 2023-08-01 12:07   ` Johannes Thumshirn
  2023-08-02  0:20   ` Naohiro Aota
  2 siblings, 0 replies; 26+ messages in thread
From: Johannes Thumshirn @ 2023-08-01 12:07 UTC (permalink / raw)
  To: Naohiro Aota, linux-btrfs; +Cc: hch, josef, dsterba

Looks good,

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v2 04/10] btrfs: zoned: defer advancing meta_write_pointer
  2023-07-31 17:17 ` [PATCH v2 04/10] btrfs: zoned: defer advancing meta_write_pointer Naohiro Aota
  2023-08-01  7:58   ` Christoph Hellwig
@ 2023-08-01 12:13   ` Johannes Thumshirn
  1 sibling, 0 replies; 26+ messages in thread
From: Johannes Thumshirn @ 2023-08-01 12:13 UTC (permalink / raw)
  To: Naohiro Aota, linux-btrfs; +Cc: hch, josef, dsterba

Looks good,

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v2 05/10] btrfs: zoned: update meta_write_pointer on zone finish
  2023-07-31 17:17 ` [PATCH v2 05/10] btrfs: zoned: update meta_write_pointer on zone finish Naohiro Aota
@ 2023-08-01 12:15   ` Johannes Thumshirn
  0 siblings, 0 replies; 26+ messages in thread
From: Johannes Thumshirn @ 2023-08-01 12:15 UTC (permalink / raw)
  To: Naohiro Aota, linux-btrfs; +Cc: hch, josef, dsterba, Christoph Hellwig

Looks good,

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v2 06/10] btrfs: zoned: reserve zones for an active metadata/system block group
  2023-07-31 17:17 ` [PATCH v2 06/10] btrfs: zoned: reserve zones for an active metadata/system block group Naohiro Aota
@ 2023-08-01 12:23   ` Johannes Thumshirn
  2023-08-02  4:50   ` Naohiro Aota
  1 sibling, 0 replies; 26+ messages in thread
From: Johannes Thumshirn @ 2023-08-01 12:23 UTC (permalink / raw)
  To: Naohiro Aota, linux-btrfs; +Cc: hch, josef, dsterba

Looks good,

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v2 09/10] btrfs: zoned: don't activate non-DATA BG on allocation
  2023-07-31 17:17 ` [PATCH v2 09/10] btrfs: zoned: don't activate non-DATA BG on allocation Naohiro Aota
@ 2023-08-01 12:34   ` Johannes Thumshirn
  0 siblings, 0 replies; 26+ messages in thread
From: Johannes Thumshirn @ 2023-08-01 12:34 UTC (permalink / raw)
  To: Naohiro Aota, linux-btrfs; +Cc: hch, josef, dsterba

Looks good,

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v2 10/10] btrfs: zoned: re-enable metadata over-commit for zoned mode
  2023-07-31 17:17 ` [PATCH v2 10/10] btrfs: zoned: re-enable metadata over-commit for zoned mode Naohiro Aota
@ 2023-08-01 12:35   ` Johannes Thumshirn
  0 siblings, 0 replies; 26+ messages in thread
From: Johannes Thumshirn @ 2023-08-01 12:35 UTC (permalink / raw)
  To: Naohiro Aota, linux-btrfs; +Cc: hch, josef, dsterba

Looks good,

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v2 03/10] btrfs: zoned: return int from btrfs_check_meta_write_pointer
  2023-07-31 17:17 ` [PATCH v2 03/10] btrfs: zoned: return int from btrfs_check_meta_write_pointer Naohiro Aota
  2023-08-01  7:56   ` Christoph Hellwig
  2023-08-01 12:07   ` Johannes Thumshirn
@ 2023-08-02  0:20   ` Naohiro Aota
  2 siblings, 0 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-08-02  0:20 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba

On Tue, Aug 01, 2023 at 02:17:12AM +0900, Naohiro Aota wrote:
> Now that we have writeback_controll passed to
> btrfs_check_meta_write_pointer(), we can move the wbc condition in
> submit_eb_page() to btrfs_check_meta_write_pointer() and return int.

Oops, I forgot to sign this. Just in case,

Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>

* Re: [PATCH v2 04/10] btrfs: zoned: defer advancing meta_write_pointer
  2023-08-01  7:58   ` Christoph Hellwig
@ 2023-08-02  1:35     ` Naohiro Aota
  0 siblings, 0 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-08-02  1:35 UTC (permalink / raw)
  To: hch; +Cc: linux-btrfs, josef, dsterba

On Tue, Aug 01, 2023 at 12:58:04AM -0700, Christoph Hellwig wrote:
> On Tue, Aug 01, 2023 at 02:17:13AM +0900, Naohiro Aota wrote:
> >  	if (!lock_extent_buffer_for_io(eb, wbc)) {
> > -		btrfs_revert_meta_write_pointer(ctx->block_group, eb);
> >  		free_extent_buffer(eb);
> >  		return 0;
> >  	}
> >  	if (ctx->block_group) {
> > -		/*
> > -		 * Implies write in zoned mode. Mark the last eb in a block group.
> > -		 */
> > +		/* Implies write in zoned mode. */
> 
> .. maybe ->block_group should be named ->zoned_bg to make this
> implication very clear to everyone touching the code?

Indeed. I'll modify patch 2 and add a comment as well.

> 
> Otherwise looks good:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH v2 06/10] btrfs: zoned: reserve zones for an active metadata/system block group
  2023-07-31 17:17 ` [PATCH v2 06/10] btrfs: zoned: reserve zones for an active metadata/system block group Naohiro Aota
  2023-08-01 12:23   ` Johannes Thumshirn
@ 2023-08-02  4:50   ` Naohiro Aota
  1 sibling, 0 replies; 26+ messages in thread
From: Naohiro Aota @ 2023-08-02  4:50 UTC (permalink / raw)
  To: linux-btrfs; +Cc: hch, josef, dsterba

On Tue, Aug 01, 2023 at 02:17:15AM +0900, Naohiro Aota wrote:
> Ensure a metadata and system block group can be activated on write time, by
> leaving a certain number of active zones when trying to activate a data
> block group.
> 
> When both metadata and system profiles are set to SINGLE, we need to
> reserve two zones. When both are DUP, we need to reserve four zones.
> 
> In the case only one of them is DUP, we should reserve three zones.
> However, handling the case requires at least two bits to track if we have
> seen DUP profile for metadata and system, which is cumbersome. So, just
> reserve four zones in that case for now.
> 
> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>

I noticed this over-reserves the zones. Even when metadata and system
block groups are already allocated and active, it still keeps 2 (or 4
for DUP) zones from being claimed by a data block group.

The reservation count must be increased when we free a metadata block
group and decreased when we allocate one.

Or, in fact, we only need to reserve the zones while pivoting the block
group. With the zoned_meta_io_lock, the metadata and system block groups
won't pivot at the same time. So, adding a bit BTRFS_*_ZONED_PIVOT_META_BG
would be enough.

Anyway, I'll rework this patch.
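
A hypothetical illustration of the over-reservation (numbers made up):
with 12 max active zones per device and both metadata and system
SINGLE, reserved_active_zones is 2. Once one metadata and one system
block group are active, 4 zones' worth of activeness is tied up (2 in
use + 2 still reserved) even though no further metadata activation is
pending, and data block groups can only claim the remaining 8.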
