linux-btrfs.vger.kernel.org archive mirror
* [PATCH STABLE 5.18 0/3] btrfs: backport zoned mode fixes
@ 2022-08-08  1:32 Naohiro Aota
  2022-08-08  1:32 ` [PATCH STABLE 5.18 1/3] btrfs: zoned: prevent allocation from previous data relocation BG Naohiro Aota
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Naohiro Aota @ 2022-08-08  1:32 UTC (permalink / raw)
  To: linux-btrfs, stable; +Cc: Naohiro Aota

These patches are backports for the 5.18 stable branch.

They all fix zoned mode related issues in btrfs.

Patch 3 looks different from upstream commit b3a3b0255797 ("btrfs:
zoned: drop optimization of zone finish") because a refactoring patch was
not picked into the stable branch. Essentially, though, they do the same
thing: always zone-finish the zones after a (nearly) full write.

Naohiro Aota (3):
  btrfs: zoned: prevent allocation from previous data relocation BG
  btrfs: zoned: fix critical section of relocation inode writeback
  btrfs: zoned: drop optimization of zone finish

 fs/btrfs/block-group.h |  1 +
 fs/btrfs/extent-tree.c | 20 +++++++++++++++--
 fs/btrfs/extent_io.c   |  3 ++-
 fs/btrfs/inode.c       |  2 ++
 fs/btrfs/zoned.c       | 49 +++++++++++++++++++++++++++++++++++++-----
 fs/btrfs/zoned.h       |  5 +++++
 6 files changed, 72 insertions(+), 8 deletions(-)

-- 
2.35.1



* [PATCH STABLE 5.18 1/3] btrfs: zoned: prevent allocation from previous data relocation BG
  2022-08-08  1:32 [PATCH STABLE 5.18 0/3] btrfs: backport zoned mode fixes Naohiro Aota
@ 2022-08-08  1:32 ` Naohiro Aota
  2022-08-08  1:32 ` [PATCH STABLE 5.18 2/3] btrfs: zoned: fix critical section of relocation inode writeback Naohiro Aota
  2022-08-08  1:32 ` [PATCH STABLE 5.18 3/3] btrfs: zoned: drop optimization of zone finish Naohiro Aota
  2 siblings, 0 replies; 5+ messages in thread
From: Naohiro Aota @ 2022-08-08  1:32 UTC (permalink / raw)
  To: linux-btrfs, stable; +Cc: Naohiro Aota, David Sterba

commit 343d8a30851c48a4ef0f5ef61d5e9fbd847a6883 upstream

After commit 5f0addf7b890 ("btrfs: zoned: use dedicated lock for data
relocation"), we observe IO errors on e.g, btrfs/232 like below.

  [09.0][T4038707] WARNING: CPU: 3 PID: 4038707 at fs/btrfs/extent-tree.c:2381 btrfs_cross_ref_exist+0xfc/0x120 [btrfs]
  <snip>
  [09.9][T4038707] Call Trace:
  [09.5][T4038707]  <TASK>
  [09.3][T4038707]  run_delalloc_nocow+0x7f1/0x11a0 [btrfs]
  [09.6][T4038707]  ? test_range_bit+0x174/0x320 [btrfs]
  [09.2][T4038707]  ? fallback_to_cow+0x980/0x980 [btrfs]
  [09.3][T4038707]  ? find_lock_delalloc_range+0x33e/0x3e0 [btrfs]
  [09.5][T4038707]  btrfs_run_delalloc_range+0x445/0x1320 [btrfs]
  [09.2][T4038707]  ? test_range_bit+0x320/0x320 [btrfs]
  [09.4][T4038707]  ? lock_downgrade+0x6a0/0x6a0
  [09.2][T4038707]  ? orc_find.part.0+0x1ed/0x300
  [09.5][T4038707]  ? __module_address.part.0+0x25/0x300
  [09.0][T4038707]  writepage_delalloc+0x159/0x310 [btrfs]
  <snip>
  [09.4][    C3] sd 10:0:1:0: [sde] tag#2620 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
  [09.5][    C3] sd 10:0:1:0: [sde] tag#2620 Sense Key : Illegal Request [current]
  [09.9][    C3] sd 10:0:1:0: [sde] tag#2620 Add. Sense: Unaligned write command
  [09.5][    C3] sd 10:0:1:0: [sde] tag#2620 CDB: Write(16) 8a 00 00 00 00 00 02 f3 63 87 00 00 00 2c 00 00
  [09.4][    C3] critical target error, dev sde, sector 396041272 op 0x1:(WRITE) flags 0x800 phys_seg 3 prio class 0
  [09.9][    C3] BTRFS error (device dm-1): bdev /dev/mapper/dml_102_2 errs: wr 1, rd 0, flush 0, corrupt 0, gen 0

The IO errors occur when we allocate a regular extent in a previous data
relocation block group.

On zoned btrfs, we use a dedicated block group to relocate data extents.
Thus, we allocate relocating data extents (pre-alloc) only from the
dedicated block group and vice versa. Once the free space in the dedicated
block group gets tight, a relocating extent may not fit into the block
group. In that case, we need to switch the dedicated block group to the
next one, and the previous one is then freed up for allocating regular
extents. The BG no longer has enough space for the relocating extent, but
there is still room to allocate smaller extents. This is where the problem
happens: by allocating a regular extent while nocow IOs for the relocation
are still on-going, we issue WRITE IOs (for the relocation) and ZONE APPEND
IOs (for the regular writes) to the same zone at the same time. Mixing
those IOs confuses the write pointer and causes the unaligned write
errors.
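
To illustrate the failure mode, here is a simplified user-space model of a
zone's write pointer (an illustration of the protocol only, not kernel
code; the sizes are made up):

  #include <stdbool.h>
  #include <stdio.h>

  /* A zone reduced to its write pointer (in blocks). */
  struct zone { unsigned long long wp; };

  /* ZONE APPEND: the device picks the location and advances the wp. */
  static unsigned long long zone_append(struct zone *z, unsigned long long len)
  {
          unsigned long long where = z->wp;

          z->wp += len;
          return where;
  }

  /* Regular WRITE: @pos was decided earlier and must match the wp now. */
  static bool zone_write(struct zone *z, unsigned long long pos,
                         unsigned long long len)
  {
          if (pos != z->wp)
                  return false;   /* "Unaligned write command" */
          z->wp += len;
          return true;
  }

  int main(void)
  {
          struct zone z = { .wp = 0 };
          unsigned long long reloc_pos = z.wp;    /* nocow WRITE placed here */

          zone_append(&z, 8);                     /* regular write sneaks in */
          if (!zone_write(&z, reloc_pos, 16))     /* relocation WRITE lands  */
                  printf("unaligned write at %llu, wp is now %llu\n",
                         reloc_pos, z.wp);
          return 0;
  }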

This commit introduces a new bit 'zoned_data_reloc_ongoing' in struct
btrfs_block_group. We set this bit before releasing the dedicated block
group, and no extents are allocated from a block group that has this bit
set. The bit is similar to setting block_group->ro, but differs in that it
still allows nocow writes to start.

Once all the nocow IO for relocation is done (hooked from
btrfs_finish_ordered_io), we reset the bit to release the block group for
further allocation.
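
The release condition can be pictured with a small user-space model (the
field names mirror the patch, but this only illustrates the arithmetic,
not the kernel structures or locking):

  #include <stdbool.h>
  #include <stdio.h>

  struct model_bg {
          unsigned long long start;         /* logical start of the BG     */
          unsigned long long alloc_offset;  /* bytes allocated in the zone */
          bool zoned_data_reloc_ongoing;
  };

  /* The bit is only cleared once the finishing ordered extent ends exactly
   * at the allocation frontier, i.e. all relocation writes are done. */
  static void release_if_done(struct model_bg *bg, unsigned long long logical,
                              unsigned long long length)
  {
          if (bg->start + bg->alloc_offset == logical + length)
                  bg->zoned_data_reloc_ongoing = false;
  }

  int main(void)
  {
          struct model_bg bg = {
                  .start = 1 << 20, .alloc_offset = 64 << 10,
                  .zoned_data_reloc_ongoing = true,
          };

          release_if_done(&bg, (1 << 20) + (48 << 10), 8 << 10);  /* not last */
          printf("ongoing=%d\n", bg.zoned_data_reloc_ongoing);    /* 1 */
          release_if_done(&bg, (1 << 20) + (48 << 10), 16 << 10); /* last one */
          printf("ongoing=%d\n", bg.zoned_data_reloc_ongoing);    /* 0 */
          return 0;
  }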

Fixes: c2707a255623 ("btrfs: zoned: add a dedicated data relocation block group")
CC: stable@vger.kernel.org # 5.16+
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/block-group.h |  1 +
 fs/btrfs/extent-tree.c | 20 ++++++++++++++++++--
 fs/btrfs/inode.c       |  2 ++
 fs/btrfs/zoned.c       | 27 +++++++++++++++++++++++++++
 fs/btrfs/zoned.h       |  5 +++++
 5 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index 19db5693175f..2a0ead57db71 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -104,6 +104,7 @@ struct btrfs_block_group {
 	unsigned int relocating_repair:1;
 	unsigned int chunk_item_inserted:1;
 	unsigned int zone_is_active:1;
+	unsigned int zoned_data_reloc_ongoing:1;
 
 	int disk_cache_state;
 
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 6aa92f84f465..f45ecd939a2c 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3836,7 +3836,7 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 	       block_group->start == fs_info->data_reloc_bg ||
 	       fs_info->data_reloc_bg == 0);
 
-	if (block_group->ro) {
+	if (block_group->ro || block_group->zoned_data_reloc_ongoing) {
 		ret = 1;
 		goto out;
 	}
@@ -3898,8 +3898,24 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 out:
 	if (ret && ffe_ctl->for_treelog)
 		fs_info->treelog_bg = 0;
-	if (ret && ffe_ctl->for_data_reloc)
+	if (ret && ffe_ctl->for_data_reloc &&
+	    fs_info->data_reloc_bg == block_group->start) {
+		/*
+		 * Do not allow further allocations from this block group.
+		 * Compared to increasing the ->ro, setting the
+		 * ->zoned_data_reloc_ongoing flag still allows nocow
+		 *  writers to come in. See btrfs_inc_nocow_writers().
+		 *
+		 * We need to disable an allocation to avoid an allocation of
+		 * regular (non-relocation data) extent. With mix of relocation
+		 * extents and regular extents, we can dispatch WRITE commands
+		 * (for relocation extents) and ZONE APPEND commands (for
+		 * regular extents) at the same time to the same zone, which
+		 * easily break the write pointer.
+		 */
+		block_group->zoned_data_reloc_ongoing = 1;
 		fs_info->data_reloc_bg = 0;
+	}
 	spin_unlock(&fs_info->relocation_bg_lock);
 	spin_unlock(&fs_info->treelog_bg_lock);
 	spin_unlock(&block_group->lock);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9ae79342631a..5d15e374d032 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -3102,6 +3102,8 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
 						ordered_extent->file_offset,
 						ordered_extent->file_offset +
 						logical_len);
+		btrfs_zoned_release_data_reloc_bg(fs_info, ordered_extent->disk_bytenr,
+						  ordered_extent->disk_num_bytes);
 	} else {
 		BUG_ON(root == fs_info->tree_root);
 		ret = insert_ordered_extent_file_extent(trans, ordered_extent);
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 5091d679a602..2c0851d94eff 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -2116,3 +2116,30 @@ void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info)
 	}
 	mutex_unlock(&fs_devices->device_list_mutex);
 }
+
+void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logical,
+				       u64 length)
+{
+	struct btrfs_block_group *block_group;
+
+	if (!btrfs_is_zoned(fs_info))
+		return;
+
+	block_group = btrfs_lookup_block_group(fs_info, logical);
+	/* It should be called on a previous data relocation block group. */
+	ASSERT(block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA));
+
+	spin_lock(&block_group->lock);
+	if (!block_group->zoned_data_reloc_ongoing)
+		goto out;
+
+	/* All relocation extents are written. */
+	if (block_group->start + block_group->alloc_offset == logical + length) {
+		/* Now, release this block group for further allocations. */
+		block_group->zoned_data_reloc_ongoing = 0;
+	}
+
+out:
+	spin_unlock(&block_group->lock);
+	btrfs_put_block_group(block_group);
+}
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index 2d898970aec5..cf6320feef46 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -80,6 +80,8 @@ void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
 				   struct extent_buffer *eb);
 void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg);
 void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info);
+void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logical,
+				       u64 length);
 #else /* CONFIG_BLK_DEV_ZONED */
 static inline int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos,
 				     struct blk_zone *zone)
@@ -241,6 +243,9 @@ static inline void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
 static inline void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg) { }
 
 static inline void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info) { }
+
+static inline void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info,
+						     u64 logical, u64 length) { }
 #endif
 
 static inline bool btrfs_dev_is_sequential(struct btrfs_device *device, u64 pos)
-- 
2.35.1



* [PATCH STABLE 5.18 2/3] btrfs: zoned: fix critical section of relocation inode writeback
  2022-08-08  1:32 [PATCH STABLE 5.18 0/3] btrfs: backport zoned mode fixes Naohiro Aota
  2022-08-08  1:32 ` [PATCH STABLE 5.18 1/3] btrfs: zoned: prevent allocation from previous data relocation BG Naohiro Aota
@ 2022-08-08  1:32 ` Naohiro Aota
  2022-08-08  1:32 ` [PATCH STABLE 5.18 3/3] btrfs: zoned: drop optimization of zone finish Naohiro Aota
  2 siblings, 0 replies; 5+ messages in thread
From: Naohiro Aota @ 2022-08-08  1:32 UTC (permalink / raw)
  To: linux-btrfs, stable; +Cc: Naohiro Aota, Johannes Thumshirn, David Sterba

commit 19ab78ca86981e0e1e73036fb73a508731a7c078 upstream

We use btrfs_zoned_data_reloc_{lock,unlock} to allow only one process to
write out to the relocation inode. That critical section must include all
the IO submission for the inode. However, flush_write_bio() in
extent_writepages() is outside the critical section, causing an IO
submission outside of the lock. This leads to out-of-order IO submission
and fails the relocation process.

Fix it by extending the critical section.
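
As a rough user-space sketch of the hazard (a pthread mutex stands in for
the btrfs data relocation lock; this is only a model, not the kernel code):

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t reloc_lock = PTHREAD_MUTEX_INITIALIZER;

  /*
   * With flush_inside_lock == 0 this mimics the old code: bios are built
   * under the lock but submitted after unlock, so two writers can
   * interleave their submissions and the device sees them out of order.
   * With flush_inside_lock == 1 the whole submission is serialized.
   */
  static void writepages(int id, int flush_inside_lock)
  {
          pthread_mutex_lock(&reloc_lock);
          printf("writer %d: build bios\n", id);
          if (flush_inside_lock)
                  printf("writer %d: flush bios (ordered)\n", id);
          pthread_mutex_unlock(&reloc_lock);
          if (!flush_inside_lock)
                  printf("writer %d: flush bios (may interleave)\n", id);
  }

  static void *worker(void *arg)
  {
          writepages((int)(long)arg, 1);
          return NULL;
  }

  int main(void)
  {
          pthread_t t[2];

          for (long i = 0; i < 2; i++)
                  pthread_create(&t[i], NULL, worker, (void *)i);
          for (int i = 0; i < 2; i++)
                  pthread_join(t[i], NULL);
          return 0;
  }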

Fixes: 35156d852762 ("btrfs: zoned: only allow one process to add pages to a relocation inode")
CC: stable@vger.kernel.org # 5.16+
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/extent_io.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a23a42ba88ca..68ddd90685d9 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -5214,13 +5214,14 @@ int extent_writepages(struct address_space *mapping,
 	 */
 	btrfs_zoned_data_reloc_lock(BTRFS_I(inode));
 	ret = extent_write_cache_pages(mapping, wbc, &epd);
-	btrfs_zoned_data_reloc_unlock(BTRFS_I(inode));
 	ASSERT(ret <= 0);
 	if (ret < 0) {
+		btrfs_zoned_data_reloc_unlock(BTRFS_I(inode));
 		end_write_bio(&epd, ret);
 		return ret;
 	}
 	ret = flush_write_bio(&epd);
+	btrfs_zoned_data_reloc_unlock(BTRFS_I(inode));
 	return ret;
 }
 
-- 
2.35.1



* [PATCH STABLE 5.18 3/3] btrfs: zoned: drop optimization of zone finish
  2022-08-08  1:32 [PATCH STABLE 5.18 0/3] btrfs: backport zoned mode fixes Naohiro Aota
  2022-08-08  1:32 ` [PATCH STABLE 5.18 1/3] btrfs: zoned: prevent allocation from previous data relocation BG Naohiro Aota
  2022-08-08  1:32 ` [PATCH STABLE 5.18 2/3] btrfs: zoned: fix critical section of relocation inode writeback Naohiro Aota
@ 2022-08-08  1:32 ` Naohiro Aota
  2022-08-08  2:00   ` Naohiro Aota
  2 siblings, 1 reply; 5+ messages in thread
From: Naohiro Aota @ 2022-08-08  1:32 UTC (permalink / raw)
  To: linux-btrfs, stable; +Cc: Naohiro Aota, Johannes Thumshirn, David Sterba

commit b3a3b0255797e1d395253366ba24a4cc6c8bdf9c upstream

We have an optimization in do_zone_finish() to send REQ_OP_ZONE_FINISH only
when necessary, i.e. we don't send REQ_OP_ZONE_FINISH when we assume we
wrote fully into the zone.

The assumption is based on "alloc_offset == capacity". This condition does
not hold if the last ordered extent is canceled due to some error. In that
case, we consider the zone deactivated without sending the finish command,
while it is actually still active.

This inconsistency results in activating another block group while we
cannot really activate the underlying zone, which causes 'active zones
exceeded' errors like the ones below.

    BTRFS error (device nvme3n2): allocation failed flags 1, wanted 520192 tree-log 0, relocation: 0
    nvme3n2: I/O Cmd(0x7d) @ LBA 160432128, 127 blocks, I/O Error (sct 0x1 / sc 0xbd) MORE DNR
    active zones exceeded error, dev nvme3n2, sector 0 op 0xd:(ZONE_APPEND) flags 0x4800 phys_seg 1 prio class 0
    nvme3n2: I/O Cmd(0x7d) @ LBA 160432128, 127 blocks, I/O Error (sct 0x1 / sc 0xbd) MORE DNR
    active zones exceeded error, dev nvme3n2, sector 0 op 0xd:(ZONE_APPEND) flags 0x4800 phys_seg 1 prio class 0

Fix the issue by removing the optimization for now.
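
For reference, a zone can also be finished explicitly from user space with
the BLKFINISHZONE ioctl; a minimal sketch (the device path and zone
geometry below are placeholders for your setup):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/blkzoned.h>

  int main(void)
  {
          const char *dev = "/dev/nvme0n1";       /* placeholder device  */
          struct blk_zone_range range = {
                  .sector = 0,                    /* zone start, in 512B */
                  .nr_sectors = 524288,           /* zone size, in 512B  */
          };
          int fd = open(dev, O_RDWR);

          if (fd < 0) {
                  perror("open");
                  return 1;
          }
          /* Move the zone to the "full" state, freeing an active zone. */
          if (ioctl(fd, BLKFINISHZONE, &range) < 0)
                  perror("BLKFINISHZONE");
          close(fd);
          return 0;
  }

The blkzone(8) tool from util-linux exposes the same operation as
"blkzone finish".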

Fixes: 8376d9e1ed8f ("btrfs: zoned: finish superblock zone once no space left for new SB")
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/zoned.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 2c0851d94eff..b6b64da3422c 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -2039,13 +2039,25 @@ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical, u64 len
 	spin_unlock(&block_group->lock);
 
 	map = block_group->physical_map;
-	device = map->stripes[0].dev;
-	physical = map->stripes[0].physical;
+	for (i = 0; i < map->num_stripes; i++) {
+		int ret;
 
-	if (!device->zone_info->max_active_zones)
-		goto out;
+		device = map->stripes[i].dev;
+		physical = map->stripes[i].physical;
 
-	btrfs_dev_clear_active_zone(device, physical);
+		if (device->zone_info->max_active_zones == 0)
+			continue;
+
+		ret = blkdev_zone_mgmt(device->bdev, REQ_OP_ZONE_FINISH,
+				       physical >> SECTOR_SHIFT,
+				       device->zone_info->zone_size >> SECTOR_SHIFT,
+				       GFP_NOFS);
+
+		if (ret)
+			return;
+
+		btrfs_dev_clear_active_zone(device, physical);
+	}
 
 	spin_lock(&fs_info->zone_active_bgs_lock);
 	ASSERT(!list_empty(&block_group->active_bg_list));
-- 
2.35.1



* Re: [PATCH STABLE 5.18 3/3] btrfs: zoned: drop optimization of zone finish
  2022-08-08  1:32 ` [PATCH STABLE 5.18 3/3] btrfs: zoned: drop optimization of zone finish Naohiro Aota
@ 2022-08-08  2:00   ` Naohiro Aota
  0 siblings, 0 replies; 5+ messages in thread
From: Naohiro Aota @ 2022-08-08  2:00 UTC (permalink / raw)
  To: linux-btrfs, stable; +Cc: Johannes Thumshirn, David Sterba

Sorry. I forgot to amend the patch with the line adding "int i". I'll send
a v2.

On Mon, Aug 08, 2022 at 10:32:10AM +0900, Naohiro Aota wrote:
> commit b3a3b0255797e1d395253366ba24a4cc6c8bdf9c upstream
> 
> We have an optimization in do_zone_finish() to send REQ_OP_ZONE_FINISH only
> when necessary, i.e. we don't send REQ_OP_ZONE_FINISH when we assume we
> wrote fully into the zone.
> 
> The assumption is based on "alloc_offset == capacity". This condition does
> not hold if the last ordered extent is canceled due to some error. In that
> case, we consider the zone deactivated without sending the finish command,
> while it is actually still active.
> 
> This inconsistency results in activating another block group while we
> cannot really activate the underlying zone, which causes 'active zones
> exceeded' errors like the ones below.
> 
>     BTRFS error (device nvme3n2): allocation failed flags 1, wanted 520192 tree-log 0, relocation: 0
>     nvme3n2: I/O Cmd(0x7d) @ LBA 160432128, 127 blocks, I/O Error (sct 0x1 / sc 0xbd) MORE DNR
>     active zones exceeded error, dev nvme3n2, sector 0 op 0xd:(ZONE_APPEND) flags 0x4800 phys_seg 1 prio class 0
>     nvme3n2: I/O Cmd(0x7d) @ LBA 160432128, 127 blocks, I/O Error (sct 0x1 / sc 0xbd) MORE DNR
>     active zones exceeded error, dev nvme3n2, sector 0 op 0xd:(ZONE_APPEND) flags 0x4800 phys_seg 1 prio class 0
> 
> Fix the issue by removing the optimization for now.
> 
> Fixes: 8376d9e1ed8f ("btrfs: zoned: finish superblock zone once no space left for new SB")
> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>  fs/btrfs/zoned.c | 22 +++++++++++++++++-----
>  1 file changed, 17 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
> index 2c0851d94eff..b6b64da3422c 100644
> --- a/fs/btrfs/zoned.c
> +++ b/fs/btrfs/zoned.c
> @@ -2039,13 +2039,25 @@ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical, u64 len
>  	spin_unlock(&block_group->lock);
>  
>  	map = block_group->physical_map;
> -	device = map->stripes[0].dev;
> -	physical = map->stripes[0].physical;
> +	for (i = 0; i < map->num_stripes; i++) {
> +		int ret;
>  
> -	if (!device->zone_info->max_active_zones)
> -		goto out;
> +		device = map->stripes[i].dev;
> +		physical = map->stripes[i].physical;
>  
> -	btrfs_dev_clear_active_zone(device, physical);
> +		if (device->zone_info->max_active_zones == 0)
> +			continue;
> +
> +		ret = blkdev_zone_mgmt(device->bdev, REQ_OP_ZONE_FINISH,
> +				       physical >> SECTOR_SHIFT,
> +				       device->zone_info->zone_size >> SECTOR_SHIFT,
> +				       GFP_NOFS);
> +
> +		if (ret)
> +			return;
> +
> +		btrfs_dev_clear_active_zone(device, physical);
> +	}
>  
>  	spin_lock(&fs_info->zone_active_bgs_lock);
>  	ASSERT(!list_empty(&block_group->active_bg_list));
> -- 
> 2.35.1
> 

