From: Naohiro Aota <naohiro.aota@wdc.com>
To: linux-btrfs@vger.kernel.org, dsterba@suse.com
Cc: hare@suse.com, linux-fsdevel@vger.kernel.org,
	Jens Axboe <axboe@kernel.dk>,
	Christoph Hellwig <hch@infradead.org>,
	"Darrick J. Wong" <darrick.wong@oracle.com>,
	Naohiro Aota <naohiro.aota@wdc.com>,
	Josef Bacik <josef@toxicpanda.com>
Subject: [PATCH v13 33/42] btrfs: mark block groups to copy for device-replace
Date: Fri, 22 Jan 2021 15:21:33 +0900	[thread overview]
Message-ID: <bac020aca6c98318b264db83aa6b80ef4fba9839.1611295439.git.naohiro.aota@wdc.com> (raw)
In-Reply-To: <cover.1611295439.git.naohiro.aota@wdc.com>

This is the first of four patches to support device-replace in ZONED mode.

There are two types of I/O during the device-replace process. One "copies"
(via the scrub functions) all the device extents from the source device to
the destination device. The other "clones" (via handle_ops_on_dev_replace())
new incoming write I/Os from users to the source device into the target
device.

Cloning incoming I/Os can break the sequential write rule on the target
device. When a write is mapped into the middle of a block group, the cloned
I/O lands in the middle of a target device zone, which violates the
requirement that zones be written sequentially.
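
To make that constraint concrete, here is a minimal userspace sketch (not
kernel code, and not part of this patch): a zoned device only accepts a
write that starts exactly at the zone's current write pointer, so the copy
pass that fills a zone from the front is fine, while a cloned write mapped
into the middle of the zone is rejected.

#include <stdbool.h>
#include <stdio.h>

struct zone {
	unsigned long long start; /* zone start offset in bytes */
	unsigned long long wp;    /* current write pointer */
};

/* A zoned device only accepts writes that begin exactly at the write pointer. */
static bool write_is_sequential(const struct zone *z, unsigned long long offset)
{
	return offset == z->wp;
}

int main(void)
{
	struct zone z = { .start = 0, .wp = 0 };

	/* The copy pass fills the zone from the front, so it stays sequential. */
	printf("copy  at offset 0:  %s\n",
	       write_is_sequential(&z, 0) ? "accepted" : "rejected");

	/* A cloned user write mapped into the middle of the block group. */
	printf("clone at offset 1M: %s\n",
	       write_is_sequential(&z, 1ULL << 20) ? "accepted" : "rejected");

	return 0;
}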

However, cloning cannot simply be disabled: incoming I/Os that target
already-copied device extents must still be cloned so that the write also
reaches the target device.

We cannot use dev_replace->cursor_{left,right} to determine whether a bio
targets a not-yet-copied region. Because there is a time gap between
finishing btrfs_scrub_dev() and rewriting the mapping tree in
btrfs_dev_replace_finishing(), a newly allocated device extent could end up
neither cloned nor copied.

So the point is to copy only the device extents that already exist. This
patch introduces mark_block_group_to_copy() to mark the existing block
groups as targets of the copy pass. handle_ops_on_dev_replace() and
dev-replace can then check the flag to do their job.
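
As an illustration only, the decision the later patches base on this flag
looks roughly like the following hedged userspace sketch (should_clone_write()
is a hypothetical helper, not the kernel implementation): writes to a block
group still marked to_copy are left for the copy pass, while writes to an
already-copied block group are cloned to the target.

#include <stdbool.h>

struct block_group {
	bool removed;
	bool to_copy; /* set by mark_block_group_to_copy() when the replace starts */
};

/*
 * Hypothetical helper (not part of this patch): should an incoming write to
 * this block group also be sent to the replace target device?
 */
static bool should_clone_write(const struct block_group *bg)
{
	/* Still waiting for the copy pass; the copy will pick this data up. */
	if (bg->to_copy)
		return false;

	/* Already copied (or created after the replace started): clone it. */
	return true;
}

int main(void)
{
	struct block_group bg = { .removed = false, .to_copy = true };

	/* A write to a still-to-copy block group is not cloned. */
	return should_clone_write(&bg) ? 1 : 0;
}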

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 fs/btrfs/block-group.h |   1 +
 fs/btrfs/dev-replace.c | 182 +++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/dev-replace.h |   3 +
 fs/btrfs/scrub.c       |  17 ++++
 4 files changed, 203 insertions(+)

diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index 19a22bf930c6..3dec66ed36cb 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -95,6 +95,7 @@ struct btrfs_block_group {
 	unsigned int iref:1;
 	unsigned int has_caching_ctl:1;
 	unsigned int removed:1;
+	unsigned int to_copy:1;
 
 	int disk_cache_state;
 
diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
index bc73f798ce3a..b7f84fe45368 100644
--- a/fs/btrfs/dev-replace.c
+++ b/fs/btrfs/dev-replace.c
@@ -22,6 +22,7 @@
 #include "dev-replace.h"
 #include "sysfs.h"
 #include "zoned.h"
+#include "block-group.h"
 
 /*
  * Device replace overview
@@ -459,6 +460,183 @@ static char* btrfs_dev_name(struct btrfs_device *device)
 		return rcu_str_deref(device->name);
 }
 
+static int mark_block_group_to_copy(struct btrfs_fs_info *fs_info,
+				    struct btrfs_device *src_dev)
+{
+	struct btrfs_path *path;
+	struct btrfs_key key;
+	struct btrfs_key found_key;
+	struct btrfs_root *root = fs_info->dev_root;
+	struct btrfs_dev_extent *dev_extent = NULL;
+	struct btrfs_block_group *cache;
+	struct btrfs_trans_handle *trans;
+	int ret = 0;
+	u64 chunk_offset;
+
+	/* Do not use "to_copy" on non-ZONED for now */
+	if (!btrfs_is_zoned(fs_info))
+		return 0;
+
+	mutex_lock(&fs_info->chunk_mutex);
+
+	/* Ensure we don't have pending new block group */
+	spin_lock(&fs_info->trans_lock);
+	while (fs_info->running_transaction &&
+	       !list_empty(&fs_info->running_transaction->dev_update_list)) {
+		spin_unlock(&fs_info->trans_lock);
+		mutex_unlock(&fs_info->chunk_mutex);
+		trans = btrfs_attach_transaction(root);
+		if (IS_ERR(trans)) {
+			ret = PTR_ERR(trans);
+			mutex_lock(&fs_info->chunk_mutex);
+			if (ret == -ENOENT)
+				continue;
+			else
+				goto unlock;
+		}
+
+		ret = btrfs_commit_transaction(trans);
+		mutex_lock(&fs_info->chunk_mutex);
+		if (ret)
+			goto unlock;
+
+		spin_lock(&fs_info->trans_lock);
+	}
+	spin_unlock(&fs_info->trans_lock);
+
+	path = btrfs_alloc_path();
+	if (!path) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	path->reada = READA_FORWARD;
+	path->search_commit_root = 1;
+	path->skip_locking = 1;
+
+	key.objectid = src_dev->devid;
+	key.offset = 0;
+	key.type = BTRFS_DEV_EXTENT_KEY;
+
+	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+	if (ret < 0)
+		goto free_path;
+	if (ret > 0) {
+		if (path->slots[0] >=
+		    btrfs_header_nritems(path->nodes[0])) {
+			ret = btrfs_next_leaf(root, path);
+			if (ret < 0)
+				goto free_path;
+			if (ret > 0) {
+				ret = 0;
+				goto free_path;
+			}
+		} else {
+			ret = 0;
+		}
+	}
+
+	while (1) {
+		struct extent_buffer *l = path->nodes[0];
+		int slot = path->slots[0];
+
+		btrfs_item_key_to_cpu(l, &found_key, slot);
+
+		if (found_key.objectid != src_dev->devid)
+			break;
+
+		if (found_key.type != BTRFS_DEV_EXTENT_KEY)
+			break;
+
+		if (found_key.offset < key.offset)
+			break;
+
+		dev_extent = btrfs_item_ptr(l, slot, struct btrfs_dev_extent);
+
+		chunk_offset = btrfs_dev_extent_chunk_offset(l, dev_extent);
+
+		cache = btrfs_lookup_block_group(fs_info, chunk_offset);
+		if (!cache)
+			goto skip;
+
+		spin_lock(&cache->lock);
+		cache->to_copy = 1;
+		spin_unlock(&cache->lock);
+
+		btrfs_put_block_group(cache);
+
+skip:
+		ret = btrfs_next_item(root, path);
+		if (ret != 0) {
+			if (ret > 0)
+				ret = 0;
+			break;
+		}
+	}
+
+free_path:
+	btrfs_free_path(path);
+unlock:
+	mutex_unlock(&fs_info->chunk_mutex);
+
+	return ret;
+}
+
+bool btrfs_finish_block_group_to_copy(struct btrfs_device *srcdev,
+				      struct btrfs_block_group *cache,
+				      u64 physical)
+{
+	struct btrfs_fs_info *fs_info = cache->fs_info;
+	struct extent_map *em;
+	struct map_lookup *map;
+	u64 chunk_offset = cache->start;
+	int num_extents, cur_extent;
+	int i;
+
+	/* Do not use "to_copy" on non-ZONED for now */
+	if (!btrfs_is_zoned(fs_info))
+		return true;
+
+	spin_lock(&cache->lock);
+	if (cache->removed) {
+		spin_unlock(&cache->lock);
+		return true;
+	}
+	spin_unlock(&cache->lock);
+
+	em = btrfs_get_chunk_map(fs_info, chunk_offset, 1);
+	ASSERT(!IS_ERR(em));
+	map = em->map_lookup;
+
+	num_extents = cur_extent = 0;
+	for (i = 0; i < map->num_stripes; i++) {
+		/* We have more device extent to copy */
+		if (srcdev != map->stripes[i].dev)
+			continue;
+
+		num_extents++;
+		if (physical == map->stripes[i].physical)
+			cur_extent = i;
+	}
+
+	free_extent_map(em);
+
+	if (num_extents > 1 && cur_extent < num_extents - 1) {
+		/*
+		 * Has more stripes on this device. Keep this BG
+		 * readonly until we finish all the stripes.
+		 */
+		return false;
+	}
+
+	/* Last stripe on this device */
+	spin_lock(&cache->lock);
+	cache->to_copy = 0;
+	spin_unlock(&cache->lock);
+
+	return true;
+}
+
 static int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
 		const char *tgtdev_name, u64 srcdevid, const char *srcdev_name,
 		int read_src)
@@ -500,6 +678,10 @@ static int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
 	if (ret)
 		return ret;
 
+	ret = mark_block_group_to_copy(fs_info, src_device);
+	if (ret)
+		return ret;
+
 	down_write(&dev_replace->rwsem);
 	switch (dev_replace->replace_state) {
 	case BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED:
diff --git a/fs/btrfs/dev-replace.h b/fs/btrfs/dev-replace.h
index 60b70dacc299..3911049a5f23 100644
--- a/fs/btrfs/dev-replace.h
+++ b/fs/btrfs/dev-replace.h
@@ -18,5 +18,8 @@ int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info);
 void btrfs_dev_replace_suspend_for_unmount(struct btrfs_fs_info *fs_info);
 int btrfs_resume_dev_replace_async(struct btrfs_fs_info *fs_info);
 int __pure btrfs_dev_replace_is_ongoing(struct btrfs_dev_replace *dev_replace);
+bool btrfs_finish_block_group_to_copy(struct btrfs_device *srcdev,
+				      struct btrfs_block_group *cache,
+				      u64 physical);
 
 #endif
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 3a0a6b8ed6f2..b57c1184f330 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3564,6 +3564,17 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 		if (!cache)
 			goto skip;
 
+
+		if (sctx->is_dev_replace && btrfs_is_zoned(fs_info)) {
+			spin_lock(&cache->lock);
+			if (!cache->to_copy) {
+				spin_unlock(&cache->lock);
+				ro_set = 0;
+				goto done;
+			}
+			spin_unlock(&cache->lock);
+		}
+
 		/*
 		 * Make sure that while we are scrubbing the corresponding block
 		 * group doesn't get its logical address and its device extents
@@ -3695,6 +3706,12 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 
 		scrub_pause_off(fs_info);
 
+		if (sctx->is_dev_replace &&
+		    !btrfs_finish_block_group_to_copy(dev_replace->srcdev,
+						      cache, found_key.offset))
+			ro_set = 0;
+
+done:
 		down_write(&dev_replace->rwsem);
 		dev_replace->cursor_left = dev_replace->cursor_right;
 		dev_replace->item_needs_writeback = 1;
-- 
2.27.0


Thread overview (62+ messages):
2021-01-22  6:21 [PATCH v13 00/42] btrfs: zoned block device support Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 01/42] block: add bio_add_zone_append_page Naohiro Aota
2021-01-23  2:50   ` Chaitanya Kulkarni
2021-01-23  3:08   ` Chaitanya Kulkarni
2021-01-25 17:31   ` Johannes Thumshirn
2021-01-22  6:21 ` [PATCH v13 02/42] iomap: support REQ_OP_ZONE_APPEND Naohiro Aota
2021-01-23  2:56   ` Chaitanya Kulkarni
2021-01-22  6:21 ` [PATCH v13 03/42] btrfs: defer loading zone info after opening trees Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 04/42] btrfs: use regular SB location on emulated zoned mode Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 05/42] btrfs: release path before calling into btrfs_load_block_group_zone_info Naohiro Aota
2021-01-22 14:57   ` Josef Bacik
2021-01-22  6:21 ` [PATCH v13 06/42] btrfs: do not load fs_info->zoned from incompat flag Naohiro Aota
2021-01-22 14:58   ` Josef Bacik
2021-01-22  6:21 ` [PATCH v13 07/42] btrfs: disallow fitrim in ZONED mode Naohiro Aota
2021-01-22 14:59   ` Josef Bacik
2021-01-22  6:21 ` [PATCH v13 08/42] btrfs: allow zoned mode on non-zoned block devices Naohiro Aota
2021-01-22 15:03   ` Josef Bacik
2021-01-22  6:21 ` [PATCH v13 09/42] btrfs: implement zoned chunk allocator Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 10/42] btrfs: verify device extent is aligned to zone Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 11/42] btrfs: load zone's allocation offset Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 12/42] btrfs: calculate allocation offset for conventional zones Naohiro Aota
2021-01-22 15:07   ` Josef Bacik
2021-01-22  6:21 ` [PATCH v13 13/42] btrfs: track unusable bytes for zones Naohiro Aota
2021-01-22 15:11   ` Josef Bacik
2021-01-25 10:37     ` Johannes Thumshirn
2021-01-25 12:08       ` Johannes Thumshirn
2021-01-22  6:21 ` [PATCH v13 14/42] btrfs: do sequential extent allocation in ZONED mode Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 15/42] btrfs: redirty released extent buffers " Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 16/42] btrfs: advance allocation pointer after tree log node Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 17/42] btrfs: enable to mount ZONED incompat flag Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 18/42] btrfs: reset zones of unused block groups Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 19/42] btrfs: extract page adding function Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 20/42] btrfs: use bio_add_zone_append_page for zoned btrfs Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 21/42] btrfs: handle REQ_OP_ZONE_APPEND as writing Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 22/42] btrfs: split ordered extent when bio is sent Naohiro Aota
2021-01-22 15:22   ` Josef Bacik
2021-01-25  8:56     ` Johannes Thumshirn
2021-01-25  9:02     ` Johannes Thumshirn
2021-01-22  6:21 ` [PATCH v13 23/42] btrfs: check if bio spans across an ordered extent Naohiro Aota
2021-01-22 15:23   ` Josef Bacik
2021-01-22  6:21 ` [PATCH v13 24/42] btrfs: extend btrfs_rmap_block for specifying a device Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 25/42] btrfs: cache if block-group is on a sequential zone Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 26/42] btrfs: save irq flags when looking up an ordered extent Naohiro Aota
2021-01-22 15:24   ` Josef Bacik
2021-01-22  6:21 ` [PATCH v13 27/42] btrfs: use ZONE_APPEND write for ZONED btrfs Naohiro Aota
2021-01-22 15:29   ` Josef Bacik
2021-01-22  6:21 ` [PATCH v13 28/42] btrfs: enable zone append writing for direct IO Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 29/42] btrfs: introduce dedicated data write path for ZONED mode Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 30/42] btrfs: serialize meta IOs on " Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 31/42] btrfs: wait existing extents before truncating Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 32/42] btrfs: avoid async metadata checksum on ZONED mode Naohiro Aota
2021-01-22  6:21 ` Naohiro Aota [this message]
2021-01-22  6:21 ` [PATCH v13 34/42] btrfs: implement cloning for ZONED device-replace Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 35/42] btrfs: implement copying " Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 36/42] btrfs: support dev-replace in ZONED mode Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 37/42] btrfs: enable relocation " Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 38/42] btrfs: relocate block group to repair IO failure in ZONED Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 39/42] btrfs: split alloc_log_tree() Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 40/42] btrfs: extend zoned allocator to use dedicated tree-log block group Naohiro Aota
2021-01-22  6:21 ` [PATCH v13 41/42] btrfs: serialize log transaction on ZONED mode Naohiro Aota
2021-01-22 15:37   ` Josef Bacik
2021-01-22  6:21 ` [PATCH v13 42/42] btrfs: reorder log node allocation Naohiro Aota
