All of lore.kernel.org
* [PATCH 0/6] Support resize and device delete cancel ops
@ 2021-05-21 12:06 David Sterba
  2021-05-21 12:06 ` [PATCH 1/6] btrfs: protect exclusive_operation by super_lock David Sterba
                   ` (8 more replies)
  0 siblings, 9 replies; 29+ messages in thread
From: David Sterba @ 2021-05-21 12:06 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

We don't have a nice interface to cancel a running resize or device
deletion from the command line. Both commands have recently become
interruptible by a signal (which includes Ctrl-C from a terminal), but
given how long that ability was absent, it is probably not yet widely
known.

Examples:

  $ btrfs fi resize -10G /mnt
  ...
  $ btrfs fi resize cancel /mnt

  $ btrfs device delete /dev/sdx /mnt
  ...
  $ btrfs device delete cancel /mnt

The cancel request returns once the resize/delete command finishes
processing the currently relocated chunk. btrfs-progs needs to be
updated as well to skip the checks of the sysfs exclusive_operation file
added in 5.10 (a raw ioctl would work).

David Sterba (6):
  btrfs: protect exclusive_operation by super_lock
  btrfs: add cancelable chunk relocation support
  btrfs: introduce try-lock semantics for exclusive op start
  btrfs: add wrapper for conditional start of exclusive operation
  btrfs: add cancelation to resize
  btrfs: add device delete cancel

 fs/btrfs/ctree.h      |  16 +++-
 fs/btrfs/disk-io.c    |   1 +
 fs/btrfs/ioctl.c      | 174 ++++++++++++++++++++++++++++++++----------
 fs/btrfs/relocation.c |  60 ++++++++++++++-
 4 files changed, 207 insertions(+), 44 deletions(-)

-- 
2.29.2


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 1/6] btrfs: protect exclusive_operation by super_lock
  2021-05-21 12:06 [PATCH 0/6] Support resize and device delete cancel ops David Sterba
@ 2021-05-21 12:06 ` David Sterba
  2021-05-21 13:37   ` Josef Bacik
  2021-05-21 12:06 ` [PATCH 2/6] btrfs: add cancelable chunk relocation support David Sterba
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 29+ messages in thread
From: David Sterba @ 2021-05-21 12:06 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The exclusive operation is currently checked and set atomically using
bit operations. Switch it to protection by a spinlock. The super block
lock is not frequently used and adding a new lock seems like overkill,
so it should be safe to reuse it.

The reason to use a spinlock is to extend the locking context so more
checks can be done, e.g. allowing the same exclusive operation to enter
the exclop section and cancel the running one. This will be used for
resize and device delete.
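For illustration only, the check-and-set under a lock can be modeled in
userspace as below. This is not kernel code: a pthread mutex stands in
for fs_info->super_lock and the names are simplified stand-ins for the
btrfs helpers.

```c
#include <assert.h>
#include <pthread.h>

/* Userspace model of the spinlock-protected exclop state */
enum exclop { EXCLOP_NONE, EXCLOP_RESIZE, EXCLOP_DEV_REMOVE };

static pthread_mutex_t super_lock = PTHREAD_MUTEX_INITIALIZER;
static enum exclop exclusive_operation = EXCLOP_NONE;

/* Start exclusive operation @type, return 1 on success */
static int exclop_start(enum exclop type)
{
	int ret = 0;

	pthread_mutex_lock(&super_lock);
	if (exclusive_operation == EXCLOP_NONE) {
		exclusive_operation = type;
		ret = 1;
	}
	pthread_mutex_unlock(&super_lock);
	return ret;
}

static void exclop_finish(void)
{
	pthread_mutex_lock(&super_lock);
	exclusive_operation = EXCLOP_NONE;
	pthread_mutex_unlock(&super_lock);
}
```

Compared to the cmpxchg, the lock allows more than a single compare to
happen atomically with the state transition, which the later patches
rely on.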

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/ctree.h |  4 ++--
 fs/btrfs/ioctl.c | 16 +++++++++++++++-
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 938d8ebf4cf3..a142e56b6b9a 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -992,8 +992,8 @@ struct btrfs_fs_info {
 	 */
 	int send_in_progress;
 
-	/* Type of exclusive operation running */
-	unsigned long exclusive_operation;
+	/* Type of exclusive operation running, protected by super_lock */
+	enum btrfs_exclusive_operation exclusive_operation;
 
 	/*
 	 * Zone size > 0 when in ZONED mode, otherwise it's used for a check
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index a7739461533d..c4e710ea08ba 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -353,15 +353,29 @@ int btrfs_fileattr_set(struct user_namespace *mnt_userns,
 	return ret;
 }
 
+/*
+ * Start exclusive operation @type, return true on success
+ */
 bool btrfs_exclop_start(struct btrfs_fs_info *fs_info,
 			enum btrfs_exclusive_operation type)
 {
-	return !cmpxchg(&fs_info->exclusive_operation, BTRFS_EXCLOP_NONE, type);
+	bool ret = false;
+
+	spin_lock(&fs_info->super_lock);
+	if (fs_info->exclusive_operation == BTRFS_EXCLOP_NONE) {
+		fs_info->exclusive_operation = type;
+		ret = true;
+	}
+	spin_unlock(&fs_info->super_lock);
+
+	return ret;
 }
 
 void btrfs_exclop_finish(struct btrfs_fs_info *fs_info)
 {
+	spin_lock(&fs_info->super_lock);
 	WRITE_ONCE(fs_info->exclusive_operation, BTRFS_EXCLOP_NONE);
+	spin_unlock(&fs_info->super_lock);
 	sysfs_notify(&fs_info->fs_devices->fsid_kobj, NULL, "exclusive_operation");
 }
 
-- 
2.29.2



* [PATCH 2/6] btrfs: add cancelable chunk relocation support
  2021-05-21 12:06 [PATCH 0/6] Support resize and device delete cancel ops David Sterba
  2021-05-21 12:06 ` [PATCH 1/6] btrfs: protect exclusive_operation by super_lock David Sterba
@ 2021-05-21 12:06 ` David Sterba
  2021-05-21 13:21   ` Josef Bacik
  2021-06-16 13:54   ` Filipe Manana
  2021-05-21 12:06 ` [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start David Sterba
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 29+ messages in thread
From: David Sterba @ 2021-05-21 12:06 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Add support code that allows canceling relocation at chunk granularity.
This is separate from and independent of balance, which also uses
relocation but is a higher-level operation and manages its own state and
pause/cancelation requests.

Relocation is used for resize (shrink) and device deletion, so this will
be a common point to implement cancelation for both. The context is
entirely within btrfs_relocate_block_group and btrfs_recover_relocation,
enclosing one chunk relocation. The status bit is set and cleared
between chunks. As relocation can take a long time, the effects may not
be immediate and the request and the actual action can race slightly.

Unlike fs_info::balance_cancel_req, fs_info::reloc_cancel_req is only
ever increased and is not paired with a decrement.
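The start/end protocol around one chunk can be sketched as a userspace
model. This is illustrative only: C11 atomics stand in for the kernel's
bit operations and atomic_t, and the names mirror (but are not) the
kernel helpers.

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>

static atomic_int reloc_running;     /* stands in for BTRFS_FS_RELOC_RUNNING */
static atomic_int reloc_cancel_req;  /* stands in for fs_info->reloc_cancel_req */

static int reloc_chunk_start(void)
{
	/* Refuse to nest: only one chunk relocation may run at a time */
	if (atomic_exchange(&reloc_running, 1))
		return -EINPROGRESS;

	if (atomic_load(&reloc_cancel_req) > 0) {
		/*
		 * Cancel was requested before the operation started: clear
		 * all requests, the caller still marks the end after cleanup.
		 */
		atomic_store(&reloc_cancel_req, 0);
		return -ECANCELED;
	}
	return 0;
}

static void reloc_chunk_end(void)
{
	/* Clear the running bit first so any waiters can continue */
	atomic_store(&reloc_running, 0);
	/* Drop cancel requests that arrived during the operation */
	atomic_store(&reloc_cancel_req, 0);
}
```

A cancel request raised between start and end is only observed at the
next check point, which is why the request and the action can race
slightly.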

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/ctree.h      |  9 +++++++
 fs/btrfs/disk-io.c    |  1 +
 fs/btrfs/relocation.c | 60 ++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 69 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index a142e56b6b9a..3dfc32a3ebab 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -565,6 +565,12 @@ enum {
 	 */
 	BTRFS_FS_BALANCE_RUNNING,
 
+	/*
+	 * Indicate that relocation of a chunk has started, it's set per chunk
+	 * and is toggled between chunks.
+	 */
+	BTRFS_FS_RELOC_RUNNING,
+
 	/* Indicate that the cleaner thread is awake and doing something. */
 	BTRFS_FS_CLEANER_RUNNING,
 
@@ -871,6 +877,9 @@ struct btrfs_fs_info {
 	struct btrfs_balance_control *balance_ctl;
 	wait_queue_head_t balance_wait_q;
 
+	/* Cancelation requests for chunk relocation */
+	atomic_t reloc_cancel_req;
+
 	u32 data_chunk_allocations;
 	u32 metadata_ratio;
 
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 8c3db9076988..93c994b78d61 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2251,6 +2251,7 @@ static void btrfs_init_balance(struct btrfs_fs_info *fs_info)
 	atomic_set(&fs_info->balance_cancel_req, 0);
 	fs_info->balance_ctl = NULL;
 	init_waitqueue_head(&fs_info->balance_wait_q);
+	atomic_set(&fs_info->reloc_cancel_req, 0);
 }
 
 static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info)
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index b70be2ac2e9e..9b84eb86e426 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -2876,11 +2876,12 @@ int setup_extent_mapping(struct inode *inode, u64 start, u64 end,
 }
 
 /*
- * Allow error injection to test balance cancellation
+ * Allow error injection to test balance/relocation cancellation
  */
 noinline int btrfs_should_cancel_balance(struct btrfs_fs_info *fs_info)
 {
 	return atomic_read(&fs_info->balance_cancel_req) ||
+		atomic_read(&fs_info->reloc_cancel_req) ||
 		fatal_signal_pending(current);
 }
 ALLOW_ERROR_INJECTION(btrfs_should_cancel_balance, TRUE);
@@ -3780,6 +3781,47 @@ struct inode *create_reloc_inode(struct btrfs_fs_info *fs_info,
 	return inode;
 }
 
+/*
+ * Mark start of chunk relocation that is cancelable. Check if the cancelation
+ * has been requested meanwhile and don't start in that case.
+ *
+ * Return:
+ *   0             success
+ *   -EINPROGRESS  operation is already in progress, that's probably a bug
+ *   -ECANCELED    cancelation request was set before the operation started
+ */
+static int reloc_chunk_start(struct btrfs_fs_info *fs_info)
+{
+	if (test_and_set_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)) {
+		/* This should not happen */
+		btrfs_err(fs_info, "reloc already running, cannot start");
+		return -EINPROGRESS;
+	}
+
+	if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
+		btrfs_info(fs_info, "chunk relocation canceled on start");
+		/*
+		 * On cancel, clear all requests but let the caller mark
+		 * the end after cleanup operations.
+		 */
+		atomic_set(&fs_info->reloc_cancel_req, 0);
+		return -ECANCELED;
+	}
+	return 0;
+}
+
+/*
+ * Mark end of chunk relocation that is cancelable and wake any waiters.
+ */
+static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
+{
+	/* Requested after start, clear bit first so any waiters can continue */
+	if (atomic_read(&fs_info->reloc_cancel_req) > 0)
+		btrfs_info(fs_info, "chunk relocation canceled during operation");
+	clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
+	atomic_set(&fs_info->reloc_cancel_req, 0);
+}
+
 static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
 {
 	struct reloc_control *rc;
@@ -3862,6 +3904,12 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
 		return -ENOMEM;
 	}
 
+	ret = reloc_chunk_start(fs_info);
+	if (ret < 0) {
+		err = ret;
+		goto out_end;
+	}
+
 	rc->extent_root = extent_root;
 	rc->block_group = bg;
 
@@ -3953,6 +4001,8 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
 		btrfs_dec_block_group_ro(rc->block_group);
 	iput(rc->data_inode);
 	btrfs_put_block_group(rc->block_group);
+out_end:
+	reloc_chunk_end(fs_info);
 	free_reloc_control(rc);
 	return err;
 }
@@ -4073,6 +4123,12 @@ int btrfs_recover_relocation(struct btrfs_root *root)
 		goto out;
 	}
 
+	ret = reloc_chunk_start(fs_info);
+	if (ret < 0) {
+		err = ret;
+		goto out_end;
+	}
+
 	rc->extent_root = fs_info->extent_root;
 
 	set_reloc_control(rc);
@@ -4137,6 +4193,8 @@ int btrfs_recover_relocation(struct btrfs_root *root)
 		err = ret;
 out_unset:
 	unset_reloc_control(rc);
+out_end:
+	reloc_chunk_end(fs_info);
 	free_reloc_control(rc);
 out:
 	free_reloc_roots(&reloc_roots);
-- 
2.29.2



* [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start
  2021-05-21 12:06 [PATCH 0/6] Support resize and device delete cancel ops David Sterba
  2021-05-21 12:06 ` [PATCH 1/6] btrfs: protect exclusive_operation by super_lock David Sterba
  2021-05-21 12:06 ` [PATCH 2/6] btrfs: add cancelable chunk relocation support David Sterba
@ 2021-05-21 12:06 ` David Sterba
  2021-05-21 13:38   ` Josef Bacik
  2021-05-27  7:43   ` Anand Jain
  2021-05-21 12:06 ` [PATCH 4/6] btrfs: add wrapper for conditional start of exclusive operation David Sterba
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 29+ messages in thread
From: David Sterba @ 2021-05-21 12:06 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Add a try-lock for exclusive operation start so that callers can do more
checks. It succeeds only if the same operation is already running. The
try-lock and unlock must pair and together substitute for
btrfs_exclop_start, thus they must also pair with btrfs_exclop_finish to
release the exclop context.
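The try-lock contract (lock stays held on success, released on failure)
can be sketched in userspace as follows. This is an illustrative model,
not kernel code: a pthread mutex stands in for the super_lock spinlock.

```c
#include <assert.h>
#include <pthread.h>

enum exclop { EXCLOP_NONE, EXCLOP_RESIZE, EXCLOP_DEV_REMOVE };

static pthread_mutex_t super_lock = PTHREAD_MUTEX_INITIALIZER;
static enum exclop exclusive_operation = EXCLOP_NONE;

/* Return 1 with super_lock held iff @type is already running */
static int exclop_start_try_lock(enum exclop type)
{
	pthread_mutex_lock(&super_lock);
	if (exclusive_operation == type)
		return 1;	/* success: lock intentionally kept held */
	pthread_mutex_unlock(&super_lock);
	return 0;
}

static void exclop_start_unlock(void)
{
	pthread_mutex_unlock(&super_lock);
}
```

Holding the lock on success lets the caller perform further checks (and
set a cancel request) while the state cannot transition to NONE
underneath it.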

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/ctree.h |  3 +++
 fs/btrfs/ioctl.c | 26 ++++++++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 3dfc32a3ebab..0dffc06b5ad4 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -3231,6 +3231,9 @@ void btrfs_update_ioctl_balance_args(struct btrfs_fs_info *fs_info,
 			       struct btrfs_ioctl_balance_args *bargs);
 bool btrfs_exclop_start(struct btrfs_fs_info *fs_info,
 			enum btrfs_exclusive_operation type);
+bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
+				 enum btrfs_exclusive_operation type);
+void btrfs_exclop_start_unlock(struct btrfs_fs_info *fs_info);
 void btrfs_exclop_finish(struct btrfs_fs_info *fs_info);
 
 /* file.c */
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index c4e710ea08ba..cacd6ee17d8e 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -371,6 +371,32 @@ bool btrfs_exclop_start(struct btrfs_fs_info *fs_info,
 	return ret;
 }
 
+/*
+ * Conditionally allow to enter the exclusive operation in case it's compatible
+ * with the running one.  This must be paired with btrfs_exclop_start_unlock and
+ * btrfs_exclop_finish.
+ *
+ * Compatibility:
+ * - the same type is already running
+ * - not BTRFS_EXCLOP_NONE - this is intentionally incompatible and the caller
+ *   must check the condition first that would allow none -> @type
+ */
+bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
+				 enum btrfs_exclusive_operation type)
+{
+	spin_lock(&fs_info->super_lock);
+	if (fs_info->exclusive_operation == type)
+		return true;
+
+	spin_unlock(&fs_info->super_lock);
+	return false;
+}
+
+void btrfs_exclop_start_unlock(struct btrfs_fs_info *fs_info)
+{
+	spin_unlock(&fs_info->super_lock);
+}
+
 void btrfs_exclop_finish(struct btrfs_fs_info *fs_info)
 {
 	spin_lock(&fs_info->super_lock);
-- 
2.29.2



* [PATCH 4/6] btrfs: add wrapper for conditional start of exclusive operation
  2021-05-21 12:06 [PATCH 0/6] Support resize and device delete cancel ops David Sterba
                   ` (2 preceding siblings ...)
  2021-05-21 12:06 ` [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start David Sterba
@ 2021-05-21 12:06 ` David Sterba
  2021-05-21 13:29   ` Josef Bacik
  2021-05-21 12:06 ` [PATCH 5/6] btrfs: add cancelation to resize David Sterba
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 29+ messages in thread
From: David Sterba @ 2021-05-21 12:06 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

To support optional cancelation of some operations, add a helper that
wraps all the combinations. In normal mode it behaves the same as
btrfs_exclop_start; in cancelation mode it checks whether the operation
is already running, requests cancelation and waits until completion.

The error codes can be returned to user space and the semantics are
unchanged, with ECANCELED added. That should be treated as an error
indicating that the operation did not complete; the operation should be
restarted or the filesystem status reviewed.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/ioctl.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index cacd6ee17d8e..c75ccadf23dc 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1600,6 +1600,48 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
 	return ret;
 }
 
+/*
+ * Try to start exclusive operation @type or cancel it if it's running.
+ *
+ * Return:
+ *   0        - normal mode, newly claimed op started
+ *  >0        - normal mode, something else is running,
+ *              return BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS to user space
+ * ECANCELED  - cancel mode, successful cancel
+ * ENOTCONN   - cancel mode, operation not running anymore
+ */
+static int exclop_start_or_cancel_reloc(struct btrfs_fs_info *fs_info,
+			enum btrfs_exclusive_operation type, bool cancel)
+{
+	if (!cancel) {
+		/* Start normal op */
+		if (!btrfs_exclop_start(fs_info, type))
+			return BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;
+		/* Exclusive operation is now claimed */
+		return 0;
+	}
+
+	/* Cancel running op */
+	if (btrfs_exclop_start_try_lock(fs_info, type)) {
+		/*
+		 * This blocks any exclop finish from setting it to NONE, so we
+		 * request cancelation. Either it runs and we will wait for it,
+		 * or it has finished and no waiting will happen.
+		 */
+		atomic_inc(&fs_info->reloc_cancel_req);
+		btrfs_exclop_start_unlock(fs_info);
+
+		if (test_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags))
+			wait_on_bit(&fs_info->flags, BTRFS_FS_RELOC_RUNNING,
+				    TASK_INTERRUPTIBLE);
+
+		return -ECANCELED;
+	}
+
+	/* Something else is running or none */
+	return -ENOTCONN;
+}
+
 static noinline int btrfs_ioctl_resize(struct file *file,
 					void __user *arg)
 {
-- 
2.29.2



* [PATCH 5/6] btrfs: add cancelation to resize
  2021-05-21 12:06 [PATCH 0/6] Support resize and device delete cancel ops David Sterba
                   ` (3 preceding siblings ...)
  2021-05-21 12:06 ` [PATCH 4/6] btrfs: add wrapper for conditional start of exclusive operation David Sterba
@ 2021-05-21 12:06 ` David Sterba
  2021-05-21 13:38   ` Josef Bacik
  2021-05-21 12:06 ` [PATCH 6/6] btrfs: add device delete cancel David Sterba
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 29+ messages in thread
From: David Sterba @ 2021-05-21 12:06 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Accept the literal string "cancel" as a resize operation and interpret
it as a request to cancel the running operation. If one is running, wait
until it finishes its current work and return ECANCELED.

A shrinking resize uses relocation to move the chunks away, so use the
conditional exclusive operation start and the cancelation helpers.
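The argument now has to be classified before exclusivity is checked, so
that "cancel" can be distinguished from a regular resize. A hypothetical
helper (the names and the enum are illustrative, not from the patch)
showing that classification:

```c
#include <assert.h>
#include <string.h>

/* Illustrative classification of the resize ioctl argument */
enum resize_kind { RESIZE_CANCEL, RESIZE_MAX, RESIZE_AMOUNT };

static enum resize_kind classify_resize_arg(const char *sizestr)
{
	if (strcmp(sizestr, "cancel") == 0)
		return RESIZE_CANCEL;
	if (strcmp(sizestr, "max") == 0)
		return RESIZE_MAX;
	/* Anything else, e.g. "-10G", "+1T", "5G", is parsed as a size */
	return RESIZE_AMOUNT;
}
```

This is why the patch moves memdup_user() of the arguments before the
exclusive operation start: the cancel keyword must be known first.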

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/ioctl.c | 47 ++++++++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 21 deletions(-)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index c75ccadf23dc..8be2ca762894 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1659,6 +1659,7 @@ static noinline int btrfs_ioctl_resize(struct file *file,
 	char *devstr = NULL;
 	int ret = 0;
 	int mod = 0;
+	bool cancel;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -1667,20 +1668,23 @@ static noinline int btrfs_ioctl_resize(struct file *file,
 	if (ret)
 		return ret;
 
-	if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_RESIZE)) {
-		mnt_drop_write_file(file);
-		return BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;
-	}
-
+	/*
+	 * Read the arguments before checking exclusivity to be able to
+	 * distinguish regular resize and cancel
+	 */
 	vol_args = memdup_user(arg, sizeof(*vol_args));
 	if (IS_ERR(vol_args)) {
 		ret = PTR_ERR(vol_args);
-		goto out;
+		goto out_drop;
 	}
-
 	vol_args->name[BTRFS_PATH_NAME_MAX] = '\0';
-
 	sizestr = vol_args->name;
+	cancel = (strcmp("cancel", sizestr) == 0);
+	ret = exclop_start_or_cancel_reloc(fs_info, BTRFS_EXCLOP_RESIZE, cancel);
+	if (ret)
+		goto out_free;
+	/* Exclusive operation is now claimed */
+
 	devstr = strchr(sizestr, ':');
 	if (devstr) {
 		sizestr = devstr + 1;
@@ -1688,10 +1692,10 @@ static noinline int btrfs_ioctl_resize(struct file *file,
 		devstr = vol_args->name;
 		ret = kstrtoull(devstr, 10, &devid);
 		if (ret)
-			goto out_free;
+			goto out_finish;
 		if (!devid) {
 			ret = -EINVAL;
-			goto out_free;
+			goto out_finish;
 		}
 		btrfs_info(fs_info, "resizing devid %llu", devid);
 	}
@@ -1701,7 +1705,7 @@ static noinline int btrfs_ioctl_resize(struct file *file,
 		btrfs_info(fs_info, "resizer unable to find device %llu",
 			   devid);
 		ret = -ENODEV;
-		goto out_free;
+		goto out_finish;
 	}
 
 	if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) {
@@ -1709,7 +1713,7 @@ static noinline int btrfs_ioctl_resize(struct file *file,
 			   "resizer unable to apply on readonly device %llu",
 		       devid);
 		ret = -EPERM;
-		goto out_free;
+		goto out_finish;
 	}
 
 	if (!strcmp(sizestr, "max"))
@@ -1725,13 +1729,13 @@ static noinline int btrfs_ioctl_resize(struct file *file,
 		new_size = memparse(sizestr, &retptr);
 		if (*retptr != '\0' || new_size == 0) {
 			ret = -EINVAL;
-			goto out_free;
+			goto out_finish;
 		}
 	}
 
 	if (test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state)) {
 		ret = -EPERM;
-		goto out_free;
+		goto out_finish;
 	}
 
 	old_size = btrfs_device_get_total_bytes(device);
@@ -1739,24 +1743,24 @@ static noinline int btrfs_ioctl_resize(struct file *file,
 	if (mod < 0) {
 		if (new_size > old_size) {
 			ret = -EINVAL;
-			goto out_free;
+			goto out_finish;
 		}
 		new_size = old_size - new_size;
 	} else if (mod > 0) {
 		if (new_size > ULLONG_MAX - old_size) {
 			ret = -ERANGE;
-			goto out_free;
+			goto out_finish;
 		}
 		new_size = old_size + new_size;
 	}
 
 	if (new_size < SZ_256M) {
 		ret = -EINVAL;
-		goto out_free;
+		goto out_finish;
 	}
 	if (new_size > device->bdev->bd_inode->i_size) {
 		ret = -EFBIG;
-		goto out_free;
+		goto out_finish;
 	}
 
 	new_size = round_down(new_size, fs_info->sectorsize);
@@ -1765,7 +1769,7 @@ static noinline int btrfs_ioctl_resize(struct file *file,
 		trans = btrfs_start_transaction(root, 0);
 		if (IS_ERR(trans)) {
 			ret = PTR_ERR(trans);
-			goto out_free;
+			goto out_finish;
 		}
 		ret = btrfs_grow_device(trans, device, new_size);
 		btrfs_commit_transaction(trans);
@@ -1778,10 +1782,11 @@ static noinline int btrfs_ioctl_resize(struct file *file,
 			"resize device %s (devid %llu) from %llu to %llu",
 			rcu_str_deref(device->name), device->devid,
 			old_size, new_size);
+out_finish:
+	btrfs_exclop_finish(fs_info);
 out_free:
 	kfree(vol_args);
-out:
-	btrfs_exclop_finish(fs_info);
+out_drop:
 	mnt_drop_write_file(file);
 	return ret;
 }
-- 
2.29.2



* [PATCH 6/6] btrfs: add device delete cancel
  2021-05-21 12:06 [PATCH 0/6] Support resize and device delete cancel ops David Sterba
                   ` (4 preceding siblings ...)
  2021-05-21 12:06 ` [PATCH 5/6] btrfs: add cancelation to resize David Sterba
@ 2021-05-21 12:06 ` David Sterba
  2021-05-21 13:38   ` Josef Bacik
  2021-05-21 12:06 ` [PATCH 1/2] btrfs-progs: device remove: add support for cancel David Sterba
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 29+ messages in thread
From: David Sterba @ 2021-05-21 12:06 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Accept the device name "cancel" as a request to cancel a running device
deletion operation. The string is matched literally; in case there's a
real device named "cancel", pass it as a full absolute path or as
"./cancel".

This works for the v1 and v2 ioctls when the device is specified by
name. Moving chunks from the device uses relocation, so use the
conditional exclusive operation start and the cancelation helpers.
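The cancel keyword only applies when the device is given by name, never
when it is specified by id. A hypothetical predicate (the flag constant
stands in for BTRFS_DEVICE_SPEC_BY_ID; names are illustrative) capturing
that rule:

```c
#include <assert.h>
#include <string.h>

#define SPEC_BY_ID 0x1	/* stands in for BTRFS_DEVICE_SPEC_BY_ID */

/*
 * "cancel" counts only for by-name removal, so "./cancel" or an
 * absolute path still names a real device.
 */
static int is_delete_cancel(unsigned int flags, const char *name)
{
	return !(flags & SPEC_BY_ID) && strcmp(name, "cancel") == 0;
}
```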

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/ioctl.c | 43 ++++++++++++++++++++++++-------------------
 1 file changed, 24 insertions(+), 19 deletions(-)

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 8be2ca762894..bd56e57c943d 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -3206,6 +3206,7 @@ static long btrfs_ioctl_rm_dev_v2(struct file *file, void __user *arg)
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
 	struct btrfs_ioctl_vol_args_v2 *vol_args;
 	int ret;
+	bool cancel = false;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -3224,18 +3225,22 @@ static long btrfs_ioctl_rm_dev_v2(struct file *file, void __user *arg)
 		ret = -EOPNOTSUPP;
 		goto out;
 	}
+	vol_args->name[BTRFS_SUBVOL_NAME_MAX] = '\0';
+	if (!(vol_args->flags & BTRFS_DEVICE_SPEC_BY_ID) &&
+	    strcmp("cancel", vol_args->name) == 0)
+		cancel = true;
 
-	if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_DEV_REMOVE)) {
-		ret = BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;
+	ret = exclop_start_or_cancel_reloc(fs_info, BTRFS_EXCLOP_DEV_REMOVE,
+					   cancel);
+	if (ret)
 		goto out;
-	}
+	/* Exclusive operation is now claimed */
 
-	if (vol_args->flags & BTRFS_DEVICE_SPEC_BY_ID) {
+	if (vol_args->flags & BTRFS_DEVICE_SPEC_BY_ID)
 		ret = btrfs_rm_device(fs_info, NULL, vol_args->devid);
-	} else {
-		vol_args->name[BTRFS_SUBVOL_NAME_MAX] = '\0';
+	else
 		ret = btrfs_rm_device(fs_info, vol_args->name, 0);
-	}
+
 	btrfs_exclop_finish(fs_info);
 
 	if (!ret) {
@@ -3259,6 +3264,7 @@ static long btrfs_ioctl_rm_dev(struct file *file, void __user *arg)
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
 	struct btrfs_ioctl_vol_args *vol_args;
 	int ret;
+	bool cancel;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -3267,25 +3273,24 @@ static long btrfs_ioctl_rm_dev(struct file *file, void __user *arg)
 	if (ret)
 		return ret;
 
-	if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_DEV_REMOVE)) {
-		ret = BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;
-		goto out_drop_write;
-	}
-
 	vol_args = memdup_user(arg, sizeof(*vol_args));
 	if (IS_ERR(vol_args)) {
 		ret = PTR_ERR(vol_args);
-		goto out;
+		goto out_drop_write;
 	}
-
 	vol_args->name[BTRFS_PATH_NAME_MAX] = '\0';
-	ret = btrfs_rm_device(fs_info, vol_args->name, 0);
+	cancel = (strcmp("cancel", vol_args->name) == 0);
+
+	ret = exclop_start_or_cancel_reloc(fs_info, BTRFS_EXCLOP_DEV_REMOVE,
+					   cancel);
+	if (ret == 0) {
+		ret = btrfs_rm_device(fs_info, vol_args->name, 0);
+		if (!ret)
+			btrfs_info(fs_info, "disk deleted %s", vol_args->name);
+		btrfs_exclop_finish(fs_info);
+	}
 
-	if (!ret)
-		btrfs_info(fs_info, "disk deleted %s", vol_args->name);
 	kfree(vol_args);
-out:
-	btrfs_exclop_finish(fs_info);
 out_drop_write:
 	mnt_drop_write_file(file);
 
-- 
2.29.2



* [PATCH 1/2] btrfs-progs: device remove: add support for cancel
  2021-05-21 12:06 [PATCH 0/6] Support resize and device delete cancel ops David Sterba
                   ` (5 preceding siblings ...)
  2021-05-21 12:06 ` [PATCH 6/6] btrfs: add device delete cancel David Sterba
@ 2021-05-21 12:06 ` David Sterba
  2021-05-21 12:06 ` [PATCH 2/2] btrfs-progs: fi resize: " David Sterba
  2021-12-14 14:49 ` [PATCH 0/6] Support resize and device delete cancel ops Anand Jain
  8 siblings, 0 replies; 29+ messages in thread
From: David Sterba @ 2021-05-21 12:06 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Recognize the special name 'cancel' for device deletion, which requests
that the kernel stop a running device deletion. This needs kernel
support, otherwise the request will fail due to another exclusive
operation running (even though it could be the same one).

The command returns after the kernel finishes any work that got
interrupted, but this should not take long on kernels 5.10+, which allow
interruptible relocation. The waiting inside the kernel is
interruptible, so this command (and the waiting stage) can itself be
interrupted.

The device size is restored when the deletion does not finish, but it's
recommended to review the filesystem state.

Note: in kernels 5.10+ sending a fatal signal (TERM, KILL, Ctrl-C) to
the process running the device deletion will cancel it too.

Example:

    $ btrfs device delete /dev/sdx /mnt
    ...
    $ btrfs device delete cancel /mnt

Signed-off-by: David Sterba <dsterba@suse.com>
---
 cmds/device.c | 42 +++++++++++++++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 9 deletions(-)

diff --git a/cmds/device.c b/cmds/device.c
index 4d1276b949b9..48067101fa7d 100644
--- a/cmds/device.c
+++ b/cmds/device.c
@@ -194,6 +194,7 @@ static int _cmd_device_remove(const struct cmd_struct *cmd,
 	int i, fdmnt, ret = 0;
 	DIR	*dirstream = NULL;
 	bool enqueue = false;
+	bool cancel = false;
 
 	optind = 0;
 	while (1) {
@@ -225,12 +226,30 @@ static int _cmd_device_remove(const struct cmd_struct *cmd,
 	if (fdmnt < 0)
 		return 1;
 
-	ret = check_running_fs_exclop(fdmnt, BTRFS_EXCLOP_DEV_REMOVE, enqueue);
-	if (ret != 0) {
-		if (ret < 0)
-			error("unable to check status of exclusive operation: %m");
-		close_file_or_dir(fdmnt, dirstream);
-		return 1;
+	/* Scan device arguments for 'cancel', that must be the only "device" */
+	for (i = optind; i < argc - 1; i++) {
+		if (cancel) {
+			error("cancel requested but another device specified: %s\n",
+				argv[i]);
+			close_file_or_dir(fdmnt, dirstream);
+			return 1;
+		}
+		if (strcmp("cancel", argv[i]) == 0) {
+			cancel = true;
+			printf("Request to cancel running device deletion\n");
+		}
+	}
+
+	if (!cancel) {
+		ret = check_running_fs_exclop(fdmnt, BTRFS_EXCLOP_DEV_REMOVE,
+					      enqueue);
+		if (ret != 0) {
+			if (ret < 0)
+				error(
+			"unable to check status of exclusive operation: %m");
+			close_file_or_dir(fdmnt, dirstream);
+			return 1;
+		}
 	}
 
 	for(i = optind; i < argc - 1; i++) {
@@ -243,8 +262,9 @@ static int _cmd_device_remove(const struct cmd_struct *cmd,
 			argv2.devid = arg_strtou64(argv[i]);
 			argv2.flags = BTRFS_DEVICE_SPEC_BY_ID;
 			is_devid = 1;
-		} else if (path_is_block_device(argv[i]) == 1 ||
-				strcmp(argv[i], "missing") == 0) {
+		} else if (strcmp(argv[i], "missing") == 0 ||
+			   cancel ||
+			   path_is_block_device(argv[i]) == 1) {
 			strncpy_null(argv2.name, argv[i]);
 		} else {
 			error("not a block device: %s", argv[i]);
@@ -303,7 +323,11 @@ static int _cmd_device_remove(const struct cmd_struct *cmd,
 	"the device.",								\
 	"If 'missing' is specified for <device>, the first device that is",	\
 	"described by the filesystem metadata, but not present at the mount",	\
-	"time will be removed. (only in degraded mode)"
+	"time will be removed. (only in degraded mode)",			\
+	"If 'cancel' is specified as the only device to delete, request cancelation", \
+	"of a previously started device deletion and wait until kernel finishes", \
+	"any pending work. This will not delete the device and the size will be", \
+	"restored to previous state. When deletion is not running, this will fail."
 
 static const char * const cmd_device_remove_usage[] = {
 	"btrfs device remove <device>|<devid> [<device>|<devid>...] <path>",
-- 
2.29.2



* [PATCH 2/2] btrfs-progs: fi resize: add support for cancel
  2021-05-21 12:06 [PATCH 0/6] Support resize and device delete cancel ops David Sterba
                   ` (6 preceding siblings ...)
  2021-05-21 12:06 ` [PATCH 1/2] btrfs-progs: device remove: add support for cancel David Sterba
@ 2021-05-21 12:06 ` David Sterba
  2021-12-14 14:49 ` [PATCH 0/6] Support resize and device delete cancel ops Anand Jain
  8 siblings, 0 replies; 29+ messages in thread
From: David Sterba @ 2021-05-21 12:06 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Recognize the special resize amount 'cancel'. This requests that the
kernel stop a running resize operation (most likely a shrinking resize).
This needs kernel support, otherwise it will fail due to another
exclusive operation running (even though it could be the same one).

The command returns after the kernel finishes any work that got
interrupted, but this should not take long on kernels 5.10+, which allow
interruptible relocation. The waiting inside the kernel is
interruptible, so this command (and the waiting stage) can itself be
interrupted.

The resize operation may relocate block groups, but the nominal
filesystem size will be restored if the resize does not finish. It's
recommended to review the filesystem state.

Note: in kernels 5.10+ sending a fatal signal (TERM, KILL, Ctrl-C) to
the process running the resize will cancel it too.

Example:

  $ btrfs fi resize -10G /mnt
  ...
  $ btrfs fi resize cancel /mnt

Signed-off-by: David Sterba <dsterba@suse.com>
---
 cmds/filesystem.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/cmds/filesystem.c b/cmds/filesystem.c
index 4f123d25320f..db8433ba3542 100644
--- a/cmds/filesystem.c
+++ b/cmds/filesystem.c
@@ -1151,6 +1151,10 @@ static int check_resize_args(const char *amount, const char *path) {
 
 	if (strcmp(sizestr, "max") == 0) {
 		res_str = "max";
+	} else if (strcmp(sizestr, "cancel") == 0) {
+		/* Different format, print and exit */
+		printf("Request to cancel resize\n");
+		goto out;
 	} else {
 		if (sizestr[0] == '-') {
 			mod = -1;
@@ -1211,6 +1215,7 @@ static int cmd_filesystem_resize(const struct cmd_struct *cmd,
 	DIR	*dirstream = NULL;
 	int ret;
 	bool enqueue = false;
+	bool cancel = false;
 
 	/*
 	 * Simplified option parser, accept only long options, the resize value
@@ -1242,6 +1247,8 @@ static int cmd_filesystem_resize(const struct cmd_struct *cmd,
 		return 1;
 	}
 
+	cancel = (strcmp("cancel", amount) == 0);
+
 	fd = btrfs_open_dir(path, &dirstream, 1);
 	if (fd < 0) {
 		/* The path is a directory */
@@ -1254,12 +1261,20 @@ static int cmd_filesystem_resize(const struct cmd_struct *cmd,
 		return 1;
 	}
 
-	ret = check_running_fs_exclop(fd, BTRFS_EXCLOP_RESIZE, enqueue);
-	if (ret != 0) {
-		if (ret < 0)
-			error("unable to check status of exclusive operation: %m");
-		close_file_or_dir(fd, dirstream);
-		return 1;
+	/*
+	 * Check if there's an exclusive operation running if possible, otherwise
+	 * let kernel handle it. Cancel request is completely handled in kernel
+	 * so make it pass.
+	 */
+	if (!cancel) {
+		ret = check_running_fs_exclop(fd, BTRFS_EXCLOP_RESIZE, enqueue);
+		if (ret != 0) {
+			if (ret < 0)
+				error(
+			"unable to check status of exclusive operation: %m");
+			close_file_or_dir(fd, dirstream);
+			return 1;
+		}
 	}
 
 	ret = check_resize_args(amount, path);
-- 
2.29.2


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/6] btrfs: add cancelable chunk relocation support
  2021-05-21 12:06 ` [PATCH 2/6] btrfs: add cancelable chunk relocation support David Sterba
@ 2021-05-21 13:21   ` Josef Bacik
  2021-05-26 22:56     ` David Sterba
  2021-06-16 13:54   ` Filipe Manana
  1 sibling, 1 reply; 29+ messages in thread
From: Josef Bacik @ 2021-05-21 13:21 UTC (permalink / raw)
  To: David Sterba, linux-btrfs

On 5/21/21 8:06 AM, David Sterba wrote:
> Add support code that will allow canceling relocation on the chunk
> granularity. This is different from and independent of balance, which
> also uses relocation but is a higher-level operation and manages its own
> state and pause/cancelation requests.
> 
> Relocation is used for resize (shrink) and device deletion so this will
> be a common point to implement cancelation for both. The context is
> entirely in btrfs_relocate_block_group and btrfs_recover_relocation,
> enclosing one chunk relocation. The status bit is set and unset between
> the chunks. As relocation can take long, the effects may not be
> immediate and the request and actual action can slightly race.
> 
> The fs_info::reloc_cancel_req is only supposed to be increased and does
> not pair with decrement like fs_info::balance_cancel_req.
> 
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>   fs/btrfs/ctree.h      |  9 +++++++
>   fs/btrfs/disk-io.c    |  1 +
>   fs/btrfs/relocation.c | 60 ++++++++++++++++++++++++++++++++++++++++++-
>   3 files changed, 69 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> index a142e56b6b9a..3dfc32a3ebab 100644
> --- a/fs/btrfs/ctree.h
> +++ b/fs/btrfs/ctree.h
> @@ -565,6 +565,12 @@ enum {
>   	 */
>   	BTRFS_FS_BALANCE_RUNNING,
>   
> +	/*
> +	 * Indicate that relocation of a chunk has started, it's set per chunk
> +	 * and is toggled between chunks.
> +	 */
> +	BTRFS_FS_RELOC_RUNNING,
> +
>   	/* Indicate that the cleaner thread is awake and doing something. */
>   	BTRFS_FS_CLEANER_RUNNING,
>   
> @@ -871,6 +877,9 @@ struct btrfs_fs_info {
>   	struct btrfs_balance_control *balance_ctl;
>   	wait_queue_head_t balance_wait_q;
>   
> +	/* Cancelation requests for chunk relocation */
> +	atomic_t reloc_cancel_req;
> +
>   	u32 data_chunk_allocations;
>   	u32 metadata_ratio;
>   
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index 8c3db9076988..93c994b78d61 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -2251,6 +2251,7 @@ static void btrfs_init_balance(struct btrfs_fs_info *fs_info)
>   	atomic_set(&fs_info->balance_cancel_req, 0);
>   	fs_info->balance_ctl = NULL;
>   	init_waitqueue_head(&fs_info->balance_wait_q);
> +	atomic_set(&fs_info->reloc_cancel_req, 0);
>   }
>   
>   static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info)
> diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
> index b70be2ac2e9e..9b84eb86e426 100644
> --- a/fs/btrfs/relocation.c
> +++ b/fs/btrfs/relocation.c
> @@ -2876,11 +2876,12 @@ int setup_extent_mapping(struct inode *inode, u64 start, u64 end,
>   }
>   
>   /*
> - * Allow error injection to test balance cancellation
> + * Allow error injection to test balance/relocation cancellation
>    */
>   noinline int btrfs_should_cancel_balance(struct btrfs_fs_info *fs_info)
>   {
>   	return atomic_read(&fs_info->balance_cancel_req) ||
> +		atomic_read(&fs_info->reloc_cancel_req) ||
>   		fatal_signal_pending(current);
>   }
>   ALLOW_ERROR_INJECTION(btrfs_should_cancel_balance, TRUE);
> @@ -3780,6 +3781,47 @@ struct inode *create_reloc_inode(struct btrfs_fs_info *fs_info,
>   	return inode;
>   }
>   
> +/*
> + * Mark start of chunk relocation that is cancelable. Check if the cancelation
> + * has been requested meanwhile and don't start in that case.
> + *
> + * Return:
> + *   0             success
> + *   -EINPROGRESS  operation is already in progress, that's probably a bug
> + *   -ECANCELED    cancelation request was set before the operation started
> + */
> +static int reloc_chunk_start(struct btrfs_fs_info *fs_info)
> +{
> +	if (test_and_set_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)) {
> +		/* This should not happen */
> +		btrfs_err(fs_info, "reloc already running, cannot start");
> +		return -EINPROGRESS;
> +	}
> +
> +	if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
> +		btrfs_info(fs_info, "chunk relocation canceled on start");
> +		/*
> +		 * On cancel, clear all requests but let the caller mark
> +		 * the end after cleanup operations.
> +		 */
> +		atomic_set(&fs_info->reloc_cancel_req, 0);
> +		return -ECANCELED;
> +	}
> +	return 0;
> +}
> +
> +/*
> + * Mark end of chunk relocation that is cancelable and wake any waiters.
> + */
> +static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
> +{
> +	/* Requested after start, clear bit first so any waiters can continue */
> +	if (atomic_read(&fs_info->reloc_cancel_req) > 0)
> +		btrfs_info(fs_info, "chunk relocation canceled during operation");
> +	clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
> +	atomic_set(&fs_info->reloc_cancel_req, 0);
> +}
> +
>   static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
>   {
>   	struct reloc_control *rc;
> @@ -3862,6 +3904,12 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
>   		return -ENOMEM;
>   	}
>   
> +	ret = reloc_chunk_start(fs_info);
> +	if (ret < 0) {
> +		err = ret;
> +		goto out_end;
> +	}

This needs a btrfs_put_block_group(bg) so we don't leak the bg.  Thanks,

Josef


* Re: [PATCH 4/6] btrfs: add wrapper for conditional start of exclusive operation
  2021-05-21 12:06 ` [PATCH 4/6] btrfs: add wrapper for conditional start of exclusive operation David Sterba
@ 2021-05-21 13:29   ` Josef Bacik
  2021-05-21 16:45     ` David Sterba
  0 siblings, 1 reply; 29+ messages in thread
From: Josef Bacik @ 2021-05-21 13:29 UTC (permalink / raw)
  To: David Sterba, linux-btrfs

On 5/21/21 8:06 AM, David Sterba wrote:
> To support optional cancelation of some operations, add a helper that
> wraps all the combinations. In normal mode it's the same as
> btrfs_exclop_start; in cancelation mode it checks if the operation is
> already running, requests cancelation and waits until completion.
> 
> The error codes can be returned to user space and the semantics are
> not changed, apart from the added ECANCELED. That should be treated as
> an error meaning the operation has not completed; the operation should
> be restarted or the filesystem status reviewed.
> 
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>   fs/btrfs/ioctl.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 42 insertions(+)
> 
> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> index cacd6ee17d8e..c75ccadf23dc 100644
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -1600,6 +1600,48 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
>   	return ret;
>   }
>   
> +/*
> + * Try to start exclusive operation @type or cancel it if it's running.
> + *
> + * Return:
> + *   0        - normal mode, newly claimed op started
> + *  >0        - normal mode, something else is running,
> + *              return BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS to user space
> + * ECANCELED  - cancel mode, successful cancel
> + * ENOTCONN   - cancel mode, operation not running anymore
> + */
> +static int exclop_start_or_cancel_reloc(struct btrfs_fs_info *fs_info,
> +			enum btrfs_exclusive_operation type, bool cancel)
> +{
> +	if (!cancel) {
> +		/* Start normal op */
> +		if (!btrfs_exclop_start(fs_info, type))
> +			return BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;
> +		/* Exclusive operation is now claimed */
> +		return 0;
> +	}
> +
> +	/* Cancel running op */
> +	if (btrfs_exclop_start_try_lock(fs_info, type)) {
> +		/*
> +		 * This blocks any exclop finish from setting it to NONE, so we
> +		 * request cancelation. Either it runs and we will wait for it,
> +		 * or it has finished and no waiting will happen.
> +		 */
> +		atomic_inc(&fs_info->reloc_cancel_req);
> +		btrfs_exclop_start_unlock(fs_info);
> +
> +		if (test_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags))
> +			wait_on_bit(&fs_info->flags, BTRFS_FS_RELOC_RUNNING,
> +				    TASK_INTERRUPTIBLE);

Do we want to capture the return value here, in case the user hits Ctrl-C we 
can return -EINTR instead so we don't think it succeeded?  Thanks,

Josef


* Re: [PATCH 1/6] btrfs: protect exclusive_operation by super_lock
  2021-05-21 12:06 ` [PATCH 1/6] btrfs: protect exclusive_operation by super_lock David Sterba
@ 2021-05-21 13:37   ` Josef Bacik
  0 siblings, 0 replies; 29+ messages in thread
From: Josef Bacik @ 2021-05-21 13:37 UTC (permalink / raw)
  To: David Sterba, linux-btrfs

On 5/21/21 8:06 AM, David Sterba wrote:
> The exclusive operation is now atomically checked and set using bit
> operations. Switch it to protection by a spinlock. The super block lock is
> not frequently used and adding a new lock seems like an overkill so it
> should be safe to reuse it.
> 
> The reason to use a spinlock is to enhance the locking context so more
> checks can be done, e.g. allowing the same exclusive operation to enter
> the exclop section and cancel the running one. This will be used for
> resize and device delete.
> 
> Signed-off-by: David Sterba <dsterba@suse.com>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef


* Re: [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start
  2021-05-21 12:06 ` [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start David Sterba
@ 2021-05-21 13:38   ` Josef Bacik
  2021-05-27  7:43   ` Anand Jain
  1 sibling, 0 replies; 29+ messages in thread
From: Josef Bacik @ 2021-05-21 13:38 UTC (permalink / raw)
  To: David Sterba, linux-btrfs

On 5/21/21 8:06 AM, David Sterba wrote:
> Add try-lock for exclusive operation start to allow callers to do more
> checks. The same operation must already be running. The try-lock and
> unlock must pair and are a substitute for btrfs_exclop_start, thus it
> must also pair with btrfs_exclop_finish to release the exclop context.
> 
> Signed-off-by: David Sterba <dsterba@suse.com>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef


* Re: [PATCH 5/6] btrfs: add cancelation to resize
  2021-05-21 12:06 ` [PATCH 5/6] btrfs: add cancelation to resize David Sterba
@ 2021-05-21 13:38   ` Josef Bacik
  0 siblings, 0 replies; 29+ messages in thread
From: Josef Bacik @ 2021-05-21 13:38 UTC (permalink / raw)
  To: David Sterba, linux-btrfs

On 5/21/21 8:06 AM, David Sterba wrote:
> Accept the literal string "cancel" as the resize operation and interpret
> it as a request to cancel the running operation. If one is running, wait
> until it finishes the current work and return ECANCELED.
> 
> A shrinking resize uses relocation to move the chunks away; use the
> conditional exclusive operation start and cancelation helpers.
> 
> Signed-off-by: David Sterba <dsterba@suse.com>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef


* Re: [PATCH 6/6] btrfs: add device delete cancel
  2021-05-21 12:06 ` [PATCH 6/6] btrfs: add device delete cancel David Sterba
@ 2021-05-21 13:38   ` Josef Bacik
  0 siblings, 0 replies; 29+ messages in thread
From: Josef Bacik @ 2021-05-21 13:38 UTC (permalink / raw)
  To: David Sterba, linux-btrfs

On 5/21/21 8:06 AM, David Sterba wrote:
> Accept the device name "cancel" as a request to cancel a running device
> deletion operation. The string is literal; in case there's a real device
> named "cancel", pass it as a full absolute path or as "./cancel".
> 
> This works for v1 and v2 ioctls when the device is specified by name.
> Moving chunks from the device uses relocation, so use the conditional
> exclusive operation start and cancelation helpers.
> 
> Signed-off-by: David Sterba <dsterba@suse.com>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef


* Re: [PATCH 4/6] btrfs: add wrapper for conditional start of exclusive operation
  2021-05-21 13:29   ` Josef Bacik
@ 2021-05-21 16:45     ` David Sterba
  2021-05-26 22:24       ` David Sterba
  0 siblings, 1 reply; 29+ messages in thread
From: David Sterba @ 2021-05-21 16:45 UTC (permalink / raw)
  To: Josef Bacik; +Cc: David Sterba, linux-btrfs

On Fri, May 21, 2021 at 09:29:16AM -0400, Josef Bacik wrote:
> On 5/21/21 8:06 AM, David Sterba wrote:
> > +		/*
> > +		 * This blocks any exclop finish from setting it to NONE, so we
> > +		 * request cancelation. Either it runs and we will wait for it,
> > +		 * or it has finished and no waiting will happen.
> > +		 */
> > +		atomic_inc(&fs_info->reloc_cancel_req);
> > +		btrfs_exclop_start_unlock(fs_info);
> > +
> > +		if (test_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags))
> > +			wait_on_bit(&fs_info->flags, BTRFS_FS_RELOC_RUNNING,
> > +				    TASK_INTERRUPTIBLE);
> 
> Do we want to capture the return value here, in case the user hits Ctrl-C we 
> can return -EINTR instead so we don't think it succeeded?  Thanks,

The cancel request will stay, so only the waiting part won't happen. I'm
not sure if this is worth distinguishing the two states, e.g. allowing progs
to print a different message.

Maybe a waiting and non-waiting cancel modes would be also useful. As
the interface is string-based we can also add 'status' that would say if
it's running or not. This should cover the use cases, but would be
a bit more complicated in the state transitions.


* Re: [PATCH 4/6] btrfs: add wrapper for conditional start of exclusive operation
  2021-05-21 16:45     ` David Sterba
@ 2021-05-26 22:24       ` David Sterba
  0 siblings, 0 replies; 29+ messages in thread
From: David Sterba @ 2021-05-26 22:24 UTC (permalink / raw)
  To: dsterba, Josef Bacik, David Sterba, linux-btrfs

On Fri, May 21, 2021 at 06:45:51PM +0200, David Sterba wrote:
> On Fri, May 21, 2021 at 09:29:16AM -0400, Josef Bacik wrote:
> > On 5/21/21 8:06 AM, David Sterba wrote:
> > > +		/*
> > > +		 * This blocks any exclop finish from setting it to NONE, so we
> > > +		 * request cancelation. Either it runs and we will wait for it,
> > > +		 * or it has finished and no waiting will happen.
> > > +		 */
> > > +		atomic_inc(&fs_info->reloc_cancel_req);
> > > +		btrfs_exclop_start_unlock(fs_info);
> > > +
> > > +		if (test_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags))
> > > +			wait_on_bit(&fs_info->flags, BTRFS_FS_RELOC_RUNNING,
> > > +				    TASK_INTERRUPTIBLE);
> > 
> > Do we want to capture the return value here, in case the user hits Ctrl-C we 
> > can return -EINTR instead so we don't think it succeeded?  Thanks,
> 
> The cancel request will stay, so only the waiting part won't happen. I'm
> not sure if this is worth distinguishing the two states, e.g. allowing progs
> to print a different message.

So as the cancelling would happen anyway, I hope it's ok to return the
ECANCELED error in all cases. Other operations like balance or scrub
aren't interruptible and wait until the condition is satisfied, but
there's a different pattern regarding the cancel request so it has to be
that way there. This could be unified but right now I don't see the need
that.

> Maybe a waiting and non-waiting cancel modes would be also useful. As
> the interface is string-based we can also add 'status' that would say if
> it's running or not. This should cover the usecases, but would be
> a bit more complicated in the state transitions.

I've asked on IRC what's the expected behaviour of cancel command
regarding waiting/not waiting. Seems that 'wait until it's finished' is
preferred and it's consistent with what scrub and balance (cancel) do.


* Re: [PATCH 2/6] btrfs: add cancelable chunk relocation support
  2021-05-21 13:21   ` Josef Bacik
@ 2021-05-26 22:56     ` David Sterba
  0 siblings, 0 replies; 29+ messages in thread
From: David Sterba @ 2021-05-26 22:56 UTC (permalink / raw)
  To: Josef Bacik; +Cc: David Sterba, linux-btrfs

On Fri, May 21, 2021 at 09:21:29AM -0400, Josef Bacik wrote:
> On 5/21/21 8:06 AM, David Sterba wrote:
> > Add support code that will allow canceling relocation on the chunk
> > granularity. This is different from and independent of balance, which
> > also uses relocation but is a higher-level operation and manages its own
> > state and pause/cancelation requests.
> > 
> > Relocation is used for resize (shrink) and device deletion so this will
> > be a common point to implement cancelation for both. The context is
> > entirely in btrfs_relocate_block_group and btrfs_recover_relocation,
> > enclosing one chunk relocation. The status bit is set and unset between
> > the chunks. As relocation can take long, the effects may not be
> > immediate and the request and actual action can slightly race.
> > 
> > The fs_info::reloc_cancel_req is only supposed to be increased and does
> > not pair with decrement like fs_info::balance_cancel_req.
> > 
> > Signed-off-by: David Sterba <dsterba@suse.com>
> > ---
> >   fs/btrfs/ctree.h      |  9 +++++++
> >   fs/btrfs/disk-io.c    |  1 +
> >   fs/btrfs/relocation.c | 60 ++++++++++++++++++++++++++++++++++++++++++-
> >   3 files changed, 69 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> > index a142e56b6b9a..3dfc32a3ebab 100644
> > --- a/fs/btrfs/ctree.h
> > +++ b/fs/btrfs/ctree.h
> > @@ -565,6 +565,12 @@ enum {
> >   	 */
> >   	BTRFS_FS_BALANCE_RUNNING,
> >   
> > +	/*
> > +	 * Indicate that relocation of a chunk has started, it's set per chunk
> > +	 * and is toggled between chunks.
> > +	 */
> > +	BTRFS_FS_RELOC_RUNNING,
> > +
> >   	/* Indicate that the cleaner thread is awake and doing something. */
> >   	BTRFS_FS_CLEANER_RUNNING,
> >   
> > @@ -871,6 +877,9 @@ struct btrfs_fs_info {
> >   	struct btrfs_balance_control *balance_ctl;
> >   	wait_queue_head_t balance_wait_q;
> >   
> > +	/* Cancelation requests for chunk relocation */
> > +	atomic_t reloc_cancel_req;
> > +
> >   	u32 data_chunk_allocations;
> >   	u32 metadata_ratio;
> >   
> > diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> > index 8c3db9076988..93c994b78d61 100644
> > --- a/fs/btrfs/disk-io.c
> > +++ b/fs/btrfs/disk-io.c
> > @@ -2251,6 +2251,7 @@ static void btrfs_init_balance(struct btrfs_fs_info *fs_info)
> >   	atomic_set(&fs_info->balance_cancel_req, 0);
> >   	fs_info->balance_ctl = NULL;
> >   	init_waitqueue_head(&fs_info->balance_wait_q);
> > +	atomic_set(&fs_info->reloc_cancel_req, 0);
> >   }
> >   
> >   static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info)
> > diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
> > index b70be2ac2e9e..9b84eb86e426 100644
> > --- a/fs/btrfs/relocation.c
> > +++ b/fs/btrfs/relocation.c
> > @@ -2876,11 +2876,12 @@ int setup_extent_mapping(struct inode *inode, u64 start, u64 end,
> >   }
> >   
> >   /*
> > - * Allow error injection to test balance cancellation
> > + * Allow error injection to test balance/relocation cancellation
> >    */
> >   noinline int btrfs_should_cancel_balance(struct btrfs_fs_info *fs_info)
> >   {
> >   	return atomic_read(&fs_info->balance_cancel_req) ||
> > +		atomic_read(&fs_info->reloc_cancel_req) ||
> >   		fatal_signal_pending(current);
> >   }
> >   ALLOW_ERROR_INJECTION(btrfs_should_cancel_balance, TRUE);
> > @@ -3780,6 +3781,47 @@ struct inode *create_reloc_inode(struct btrfs_fs_info *fs_info,
> >   	return inode;
> >   }
> >   
> > +/*
> > + * Mark start of chunk relocation that is cancelable. Check if the cancelation
> > + * has been requested meanwhile and don't start in that case.
> > + *
> > + * Return:
> > + *   0             success
> > + *   -EINPROGRESS  operation is already in progress, that's probably a bug
> > + *   -ECANCELED    cancelation request was set before the operation started
> > + */
> > +static int reloc_chunk_start(struct btrfs_fs_info *fs_info)
> > +{
> > +	if (test_and_set_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)) {
> > +		/* This should not happen */
> > +		btrfs_err(fs_info, "reloc already running, cannot start");
> > +		return -EINPROGRESS;
> > +	}
> > +
> > +	if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
> > +		btrfs_info(fs_info, "chunk relocation canceled on start");
> > +		/*
> > +		 * On cancel, clear all requests but let the caller mark
> > +		 * the end after cleanup operations.
> > +		 */
> > +		atomic_set(&fs_info->reloc_cancel_req, 0);
> > +		return -ECANCELED;
> > +	}
> > +	return 0;
> > +}
> > +
> > +/*
> > + * Mark end of chunk relocation that is cancelable and wake any waiters.
> > + */
> > +static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
> > +{
> > +	/* Requested after start, clear bit first so any waiters can continue */
> > +	if (atomic_read(&fs_info->reloc_cancel_req) > 0)
> > +		btrfs_info(fs_info, "chunk relocation canceled during operation");
> > +	clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
> > +	atomic_set(&fs_info->reloc_cancel_req, 0);
> > +}
> > +
> >   static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
> >   {
> >   	struct reloc_control *rc;
> > @@ -3862,6 +3904,12 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
> >   		return -ENOMEM;
> >   	}
> >   
> > +	ret = reloc_chunk_start(fs_info);
> > +	if (ret < 0) {
> > +		err = ret;
> > +		goto out_end;
> > +	}
> 
> This needs a btrfs_put_block_group(bg) so we don't leak the bg.  Thanks,

Indeed, thanks, As there are no other remaining things to fix I won't
repost the whole series just for this. I've fixed it in misc-next, where
the exit block looks like

@@ -3952,7 +4000,9 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
        if (err && rw)
                btrfs_dec_block_group_ro(rc->block_group);
        iput(rc->data_inode);
+out_put_bg:
        btrfs_put_block_group(rc->block_group);
+       reloc_chunk_end(fs_info);
        free_reloc_control(rc);
        return err;
 }

and the failure jumps to out_put_bg, ie. restoring the ro count.


* Re: [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start
  2021-05-21 12:06 ` [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start David Sterba
  2021-05-21 13:38   ` Josef Bacik
@ 2021-05-27  7:43   ` Anand Jain
  2021-05-28 12:30     ` David Sterba
  1 sibling, 1 reply; 29+ messages in thread
From: Anand Jain @ 2021-05-27  7:43 UTC (permalink / raw)
  To: David Sterba; +Cc: linux-btrfs

On 21/05/2021 20:06, David Sterba wrote:
> Add try-lock for exclusive operation start to allow callers to do more
> checks. The same operation must already be running. The try-lock and
> unlock must pair and are a substitute for btrfs_exclop_start, thus it
> must also pair with btrfs_exclop_finish to release the exclop context.
> 
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>   fs/btrfs/ctree.h |  3 +++
>   fs/btrfs/ioctl.c | 26 ++++++++++++++++++++++++++
>   2 files changed, 29 insertions(+)
> 
> diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> index 3dfc32a3ebab..0dffc06b5ad4 100644
> --- a/fs/btrfs/ctree.h
> +++ b/fs/btrfs/ctree.h
> @@ -3231,6 +3231,9 @@ void btrfs_update_ioctl_balance_args(struct btrfs_fs_info *fs_info,
>   			       struct btrfs_ioctl_balance_args *bargs);
>   bool btrfs_exclop_start(struct btrfs_fs_info *fs_info,
>   			enum btrfs_exclusive_operation type);
> +bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
> +				 enum btrfs_exclusive_operation type);
> +void btrfs_exclop_start_unlock(struct btrfs_fs_info *fs_info);
>   void btrfs_exclop_finish(struct btrfs_fs_info *fs_info);
>   
>   /* file.c */
> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> index c4e710ea08ba..cacd6ee17d8e 100644
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -371,6 +371,32 @@ bool btrfs_exclop_start(struct btrfs_fs_info *fs_info,
>   	return ret;
>   }
>   
> +/*
> + * Conditionally allow to enter the exclusive operation in case it's compatible
> + * with the running one.  This must be paired with btrfs_exclop_start_unlock and
> + * btrfs_exclop_finish.
> + *
> + * Compatibility:
> + * - the same type is already running
> + * - not BTRFS_EXCLOP_NONE - this is intentionally incompatible and the caller
> + *   must check the condition first that would allow none -> @type
> + */
> +bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
> +				 enum btrfs_exclusive_operation type)
> +{
> +	spin_lock(&fs_info->super_lock);
> +	if (fs_info->exclusive_operation == type)
> +		return true;
> +
> +	spin_unlock(&fs_info->super_lock);
> +	return false;
> +}
> +

  Sorry for the late comment.
  Nit:
  This function implements a conditional lock. But the function name
  misleadingly suggests an operation similar to spin_trylock() or
  mutex_trylock(). How about btrfs_exclop_start_cond_lock() instead?
  Otherwise, looks good.

  Reviewed-by: Anand Jain <anand.jain@oracle.com>

Thanks.

> +void btrfs_exclop_start_unlock(struct btrfs_fs_info *fs_info)
> +{
> +	spin_unlock(&fs_info->super_lock);
> +}
> +
>   void btrfs_exclop_finish(struct btrfs_fs_info *fs_info)
>   {
>   	spin_lock(&fs_info->super_lock);
> 



* Re: [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start
  2021-05-27  7:43   ` Anand Jain
@ 2021-05-28 12:30     ` David Sterba
  2021-05-29 13:48       ` Anand Jain
  0 siblings, 1 reply; 29+ messages in thread
From: David Sterba @ 2021-05-28 12:30 UTC (permalink / raw)
  To: Anand Jain; +Cc: David Sterba, linux-btrfs

On Thu, May 27, 2021 at 03:43:46PM +0800, Anand Jain wrote:
> On 21/05/2021 20:06, David Sterba wrote:
>   Nit:
>   This function implements a conditional lock. But the function name
>   misleads to some operation similar to spin_trylock() or
>   mutex_trylock(). How about btrfs_exclop_start_cond_lock() instead?

The semantics is the same as spin_trylock, so it's named like it. A try
lock is a conditional lock, so I would not want to cause confusion by using
another naming scheme.

https://elixir.bootlin.com/linux/latest/source/include/linux/spinlock_api_smp.h#L86

static inline int __raw_spin_trylock(raw_spinlock_t *lock)
{
	preempt_disable();
	if (do_raw_spin_trylock(lock)) {
		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
		return 1;
	}
	preempt_enable();
	return 0;
}

bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
                                enum btrfs_exclusive_operation type)
{
       spin_lock(&fs_info->super_lock);
       if (fs_info->exclusive_operation == type)
               return true;

       spin_unlock(&fs_info->super_lock);
       return false;
}

The code flow is the same.


* Re: [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start
  2021-05-28 12:30     ` David Sterba
@ 2021-05-29 13:48       ` Anand Jain
  2021-05-31 18:23         ` David Sterba
  0 siblings, 1 reply; 29+ messages in thread
From: Anand Jain @ 2021-05-29 13:48 UTC (permalink / raw)
  To: dsterba, David Sterba, linux-btrfs

On 28/05/2021 20:30, David Sterba wrote:
> On Thu, May 27, 2021 at 03:43:46PM +0800, Anand Jain wrote:
>> On 21/05/2021 20:06, David Sterba wrote:
>>    Nit:
>>    This function implements a conditional lock. But the function name
>>    misleads to some operation similar to spin_trylock() or
>>    mutex_trylock(). How about btrfs_exclop_start_cond_lock() instead?
> 
> The semantics is the same as spin_trylock, so it's named like it. A try
> lock is a conditional lock, so I would not want to cause confusion by using
> another naming scheme.
> 
> https://elixir.bootlin.com/linux/latest/source/include/linux/spinlock_api_smp.h#L86
> 
> static inline int __raw_spin_trylock(raw_spinlock_t *lock)
> {
> 	preempt_disable();
> 	if (do_raw_spin_trylock(lock)) {
> 		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
> 		return 1;
> 	}
> 	preempt_enable();
> 	return 0;
> }
> 
> bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
>                                  enum btrfs_exclusive_operation type)
> {
>         spin_lock(&fs_info->super_lock);
>         if (fs_info->exclusive_operation == type)
>                 return true;
> 
>         spin_unlock(&fs_info->super_lock);
>         return false;
> }
> 
> The code flow is the same.
> 

Oh. Ok, now I understand your POV.

I looked up the document to check what it says, and it matched with
my understanding too, as below.

https://www.kernel.org/doc/htmldocs/kernel-locking/trylock-functions.html
-----
::
They can be used if you need no access to the data protected with the 
lock when some other thread is holding the lock.
::
----

Mainly ...trylocks are non-blocking/non-waiting locks.

However, btrfs_exclop_start_try_lock() can be blocking.

But as this is trivial it should be ok. Thanks for clarifying.

- Anand


* Re: [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start
  2021-05-29 13:48       ` Anand Jain
@ 2021-05-31 18:23         ` David Sterba
  0 siblings, 0 replies; 29+ messages in thread
From: David Sterba @ 2021-05-31 18:23 UTC (permalink / raw)
  To: Anand Jain; +Cc: dsterba, David Sterba, linux-btrfs

On Sat, May 29, 2021 at 09:48:46PM +0800, Anand Jain wrote:
> On 28/05/2021 20:30, David Sterba wrote:
> > On Thu, May 27, 2021 at 03:43:46PM +0800, Anand Jain wrote:
> >> On 21/05/2021 20:06, David Sterba wrote:
> > The code flow is the same.
> > 
> 
> Oh. Ok, now I understand your POV.
> 
> I looked up the document to check what it says, and it matched with
> my understanding too, as below.
> 
> https://www.kernel.org/doc/htmldocs/kernel-locking/trylock-functions.html
> -----
> ::
> They can be used if you need no access to the data protected with the 
> lock when some other thread is holding the lock.
> ::
> ----
> 
> Mainly ...trylocks are non-blocking/non-waiting locks.
> 
> However, btrfs_exclop_start_try_lock() can be blocking.

I see, the blocking is there. It should be unnoticeable in practice as
it's locking only a few instructions. The semantics is slighly different
than a plain lock as it needs to check if the exclusive operation is
locked, not the lock itself. If it had to be a true nonblocking trylock,
it would have to check the exclusive_operation value unlocked and either
bail if it's different or recheck it again under lock. This sounds more
complicated than we need.

* Re: [PATCH 2/6] btrfs: add cancelable chunk relocation support
  2021-05-21 12:06 ` [PATCH 2/6] btrfs: add cancelable chunk relocation support David Sterba
  2021-05-21 13:21   ` Josef Bacik
@ 2021-06-16 13:54   ` Filipe Manana
  2021-06-16 13:55     ` Filipe Manana
  1 sibling, 1 reply; 29+ messages in thread
From: Filipe Manana @ 2021-06-16 13:54 UTC (permalink / raw)
  To: David Sterba; +Cc: linux-btrfs

On Fri, May 21, 2021 at 9:15 PM David Sterba <dsterba@suse.com> wrote:
>
> Add support code that will allow canceling relocation on the chunk
> granularity. This is different and independent of balance, that also
> uses relocation but is a higher level operation and manages it's own
> state and pause/cancelation requests.
>
> Relocation is used for resize (shrink) and device deletion so this will
> be a common point to implement cancelation for both. The context is
> entirely in btrfs_relocate_block_group and btrfs_recover_relocation,
> enclosing one chunk relocation. The status bit is set and unset between
> the chunks. As relocation can take long, the effects may not be
> immediate and the request and actual action can slightly race.
>
> The fs_info::reloc_cancel_req is only supposed to be increased and does
> not pair with decrement like fs_info::balance_cancel_req.
>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>  fs/btrfs/ctree.h      |  9 +++++++
>  fs/btrfs/disk-io.c    |  1 +
>  fs/btrfs/relocation.c | 60 ++++++++++++++++++++++++++++++++++++++++++-
>  3 files changed, 69 insertions(+), 1 deletion(-)
>
> diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> index a142e56b6b9a..3dfc32a3ebab 100644
> --- a/fs/btrfs/ctree.h
> +++ b/fs/btrfs/ctree.h
> @@ -565,6 +565,12 @@ enum {
>          */
>         BTRFS_FS_BALANCE_RUNNING,
>
> +       /*
> +        * Indicate that relocation of a chunk has started, it's set per chunk
> +        * and is toggled between chunks.
> +        */
> +       BTRFS_FS_RELOC_RUNNING,
> +
>         /* Indicate that the cleaner thread is awake and doing something. */
>         BTRFS_FS_CLEANER_RUNNING,
>
> @@ -871,6 +877,9 @@ struct btrfs_fs_info {
>         struct btrfs_balance_control *balance_ctl;
>         wait_queue_head_t balance_wait_q;
>
> +       /* Cancelation requests for chunk relocation */
> +       atomic_t reloc_cancel_req;
> +
>         u32 data_chunk_allocations;
>         u32 metadata_ratio;
>
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index 8c3db9076988..93c994b78d61 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -2251,6 +2251,7 @@ static void btrfs_init_balance(struct btrfs_fs_info *fs_info)
>         atomic_set(&fs_info->balance_cancel_req, 0);
>         fs_info->balance_ctl = NULL;
>         init_waitqueue_head(&fs_info->balance_wait_q);
> +       atomic_set(&fs_info->reloc_cancel_req, 0);
>  }
>
>  static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info)
> diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
> index b70be2ac2e9e..9b84eb86e426 100644
> --- a/fs/btrfs/relocation.c
> +++ b/fs/btrfs/relocation.c
> @@ -2876,11 +2876,12 @@ int setup_extent_mapping(struct inode *inode, u64 start, u64 end,
>  }
>
>  /*
> - * Allow error injection to test balance cancellation
> + * Allow error injection to test balance/relocation cancellation
>   */
>  noinline int btrfs_should_cancel_balance(struct btrfs_fs_info *fs_info)
>  {
>         return atomic_read(&fs_info->balance_cancel_req) ||
> +               atomic_read(&fs_info->reloc_cancel_req) ||
>                 fatal_signal_pending(current);
>  }
>  ALLOW_ERROR_INJECTION(btrfs_should_cancel_balance, TRUE);
> @@ -3780,6 +3781,47 @@ struct inode *create_reloc_inode(struct btrfs_fs_info *fs_info,
>         return inode;
>  }
>
> +/*
> + * Mark start of chunk relocation that is cancelable. Check if the cancelation
> + * has been requested meanwhile and don't start in that case.
> + *
> + * Return:
> + *   0             success
> + *   -EINPROGRESS  operation is already in progress, that's probably a bug
> + *   -ECANCELED    cancelation request was set before the operation started
> + */
> +static int reloc_chunk_start(struct btrfs_fs_info *fs_info)
> +{
> +       if (test_and_set_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)) {
> +               /* This should not happen */
> +               btrfs_err(fs_info, "reloc already running, cannot start");
> +               return -EINPROGRESS;
> +       }
> +
> +       if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
> +               btrfs_info(fs_info, "chunk relocation canceled on start");
> +               /*
> +                * On cancel, clear all requests but let the caller mark
> +                * the end after cleanup operations.
> +                */
> +               atomic_set(&fs_info->reloc_cancel_req, 0);
> +               return -ECANCELED;
> +       }
> +       return 0;
> +}
> +
> +/*
> + * Mark end of chunk relocation that is cancelable and wake any waiters.
> + */
> +static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
> +{
> +       /* Requested after start, clear bit first so any waiters can continue */
> +       if (atomic_read(&fs_info->reloc_cancel_req) > 0)
> +               btrfs_info(fs_info, "chunk relocation canceled during operation");
> +       clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
> +       atomic_set(&fs_info->reloc_cancel_req, 0);
> +}
> +
>  static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
>  {
>         struct reloc_control *rc;
> @@ -3862,6 +3904,12 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
>                 return -ENOMEM;
>         }
>
> +       ret = reloc_chunk_start(fs_info);
> +       if (ret < 0) {
> +               err = ret;
> +               goto out_end;

There's a bug here. At out_end we do:

btrfs_put_block_group(rc->block_group);

But rc->block_group was not yet assigned.

Thanks.

> +       }
> +
>         rc->extent_root = extent_root;
>         rc->block_group = bg;
>
> @@ -3953,6 +4001,8 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
>                 btrfs_dec_block_group_ro(rc->block_group);
>         iput(rc->data_inode);
>         btrfs_put_block_group(rc->block_group);
> +out_end:
> +       reloc_chunk_end(fs_info);
>         free_reloc_control(rc);
>         return err;
>  }
> @@ -4073,6 +4123,12 @@ int btrfs_recover_relocation(struct btrfs_root *root)
>                 goto out;
>         }
>
> +       ret = reloc_chunk_start(fs_info);
> +       if (ret < 0) {
> +               err = ret;
> +               goto out_end;
> +       }
> +
>         rc->extent_root = fs_info->extent_root;
>
>         set_reloc_control(rc);
> @@ -4137,6 +4193,8 @@ int btrfs_recover_relocation(struct btrfs_root *root)
>                 err = ret;
>  out_unset:
>         unset_reloc_control(rc);
> +out_end:
> +       reloc_chunk_end(fs_info);
>         free_reloc_control(rc);
>  out:
>         free_reloc_roots(&reloc_roots);
> --
> 2.29.2
>


-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”

* Re: [PATCH 2/6] btrfs: add cancelable chunk relocation support
  2021-06-16 13:54   ` Filipe Manana
@ 2021-06-16 13:55     ` Filipe Manana
  2021-06-16 15:53       ` David Sterba
  2021-06-16 15:58       ` [PATCH v2] btrfs: add cancellable " David Sterba
  0 siblings, 2 replies; 29+ messages in thread
From: Filipe Manana @ 2021-06-16 13:55 UTC (permalink / raw)
  To: David Sterba; +Cc: linux-btrfs

On Wed, Jun 16, 2021 at 2:54 PM Filipe Manana <fdmanana@gmail.com> wrote:
>
> On Fri, May 21, 2021 at 9:15 PM David Sterba <dsterba@suse.com> wrote:
> >
> > Add support code that will allow canceling relocation on the chunk
> > granularity. This is different and independent of balance, that also
> > uses relocation but is a higher level operation and manages it's own
> > state and pause/cancelation requests.
> >
> > Relocation is used for resize (shrink) and device deletion so this will
> > be a common point to implement cancelation for both. The context is
> > entirely in btrfs_relocate_block_group and btrfs_recover_relocation,
> > enclosing one chunk relocation. The status bit is set and unset between
> > the chunks. As relocation can take long, the effects may not be
> > immediate and the request and actual action can slightly race.
> >
> > The fs_info::reloc_cancel_req is only supposed to be increased and does
> > not pair with decrement like fs_info::balance_cancel_req.
> >
> > Signed-off-by: David Sterba <dsterba@suse.com>
> > ---
> >  fs/btrfs/ctree.h      |  9 +++++++
> >  fs/btrfs/disk-io.c    |  1 +
> >  fs/btrfs/relocation.c | 60 ++++++++++++++++++++++++++++++++++++++++++-
> >  3 files changed, 69 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> > index a142e56b6b9a..3dfc32a3ebab 100644
> > --- a/fs/btrfs/ctree.h
> > +++ b/fs/btrfs/ctree.h
> > @@ -565,6 +565,12 @@ enum {
> >          */
> >         BTRFS_FS_BALANCE_RUNNING,
> >
> > +       /*
> > +        * Indicate that relocation of a chunk has started, it's set per chunk
> > +        * and is toggled between chunks.
> > +        */
> > +       BTRFS_FS_RELOC_RUNNING,
> > +
> >         /* Indicate that the cleaner thread is awake and doing something. */
> >         BTRFS_FS_CLEANER_RUNNING,
> >
> > @@ -871,6 +877,9 @@ struct btrfs_fs_info {
> >         struct btrfs_balance_control *balance_ctl;
> >         wait_queue_head_t balance_wait_q;
> >
> > +       /* Cancelation requests for chunk relocation */
> > +       atomic_t reloc_cancel_req;
> > +
> >         u32 data_chunk_allocations;
> >         u32 metadata_ratio;
> >
> > diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> > index 8c3db9076988..93c994b78d61 100644
> > --- a/fs/btrfs/disk-io.c
> > +++ b/fs/btrfs/disk-io.c
> > @@ -2251,6 +2251,7 @@ static void btrfs_init_balance(struct btrfs_fs_info *fs_info)
> >         atomic_set(&fs_info->balance_cancel_req, 0);
> >         fs_info->balance_ctl = NULL;
> >         init_waitqueue_head(&fs_info->balance_wait_q);
> > +       atomic_set(&fs_info->reloc_cancel_req, 0);
> >  }
> >
> >  static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info)
> > diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
> > index b70be2ac2e9e..9b84eb86e426 100644
> > --- a/fs/btrfs/relocation.c
> > +++ b/fs/btrfs/relocation.c
> > @@ -2876,11 +2876,12 @@ int setup_extent_mapping(struct inode *inode, u64 start, u64 end,
> >  }
> >
> >  /*
> > - * Allow error injection to test balance cancellation
> > + * Allow error injection to test balance/relocation cancellation
> >   */
> >  noinline int btrfs_should_cancel_balance(struct btrfs_fs_info *fs_info)
> >  {
> >         return atomic_read(&fs_info->balance_cancel_req) ||
> > +               atomic_read(&fs_info->reloc_cancel_req) ||
> >                 fatal_signal_pending(current);
> >  }
> >  ALLOW_ERROR_INJECTION(btrfs_should_cancel_balance, TRUE);
> > @@ -3780,6 +3781,47 @@ struct inode *create_reloc_inode(struct btrfs_fs_info *fs_info,
> >         return inode;
> >  }
> >
> > +/*
> > + * Mark start of chunk relocation that is cancelable. Check if the cancelation
> > + * has been requested meanwhile and don't start in that case.
> > + *
> > + * Return:
> > + *   0             success
> > + *   -EINPROGRESS  operation is already in progress, that's probably a bug
> > + *   -ECANCELED    cancelation request was set before the operation started
> > + */
> > +static int reloc_chunk_start(struct btrfs_fs_info *fs_info)
> > +{
> > +       if (test_and_set_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)) {
> > +               /* This should not happen */
> > +               btrfs_err(fs_info, "reloc already running, cannot start");
> > +               return -EINPROGRESS;
> > +       }
> > +
> > +       if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
> > +               btrfs_info(fs_info, "chunk relocation canceled on start");
> > +               /*
> > +                * On cancel, clear all requests but let the caller mark
> > +                * the end after cleanup operations.
> > +                */
> > +               atomic_set(&fs_info->reloc_cancel_req, 0);
> > +               return -ECANCELED;
> > +       }
> > +       return 0;
> > +}
> > +
> > +/*
> > + * Mark end of chunk relocation that is cancelable and wake any waiters.
> > + */
> > +static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
> > +{
> > +       /* Requested after start, clear bit first so any waiters can continue */
> > +       if (atomic_read(&fs_info->reloc_cancel_req) > 0)
> > +               btrfs_info(fs_info, "chunk relocation canceled during operation");
> > +       clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
> > +       atomic_set(&fs_info->reloc_cancel_req, 0);
> > +}
> > +
> >  static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
> >  {
> >         struct reloc_control *rc;
> > @@ -3862,6 +3904,12 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
> >                 return -ENOMEM;
> >         }
> >
> > +       ret = reloc_chunk_start(fs_info);
> > +       if (ret < 0) {
> > +               err = ret;
> > +               goto out_end;
>
> There's a bug here. At out_end we do:
>
> btrfs_put_block_group(rc->block_group);
>
> But rc->block_group was not yet assigned.

On misc-next the label is actually "out_put_bg" and the block group
put happens under it, but not in the version posted here.

>
> Thanks.
>
> > +       }
> > +
> >         rc->extent_root = extent_root;
> >         rc->block_group = bg;
> >
> > @@ -3953,6 +4001,8 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
> >                 btrfs_dec_block_group_ro(rc->block_group);
> >         iput(rc->data_inode);
> >         btrfs_put_block_group(rc->block_group);
> > +out_end:
> > +       reloc_chunk_end(fs_info);
> >         free_reloc_control(rc);
> >         return err;
> >  }
> > @@ -4073,6 +4123,12 @@ int btrfs_recover_relocation(struct btrfs_root *root)
> >                 goto out;
> >         }
> >
> > +       ret = reloc_chunk_start(fs_info);
> > +       if (ret < 0) {
> > +               err = ret;
> > +               goto out_end;
> > +       }
> > +
> >         rc->extent_root = fs_info->extent_root;
> >
> >         set_reloc_control(rc);
> > @@ -4137,6 +4193,8 @@ int btrfs_recover_relocation(struct btrfs_root *root)
> >                 err = ret;
> >  out_unset:
> >         unset_reloc_control(rc);
> > +out_end:
> > +       reloc_chunk_end(fs_info);
> >         free_reloc_control(rc);
> >  out:
> >         free_reloc_roots(&reloc_roots);
> > --
> > 2.29.2
> >
>
>
> --
> Filipe David Manana,
>
> “Whether you think you can, or you think you can't — you're right.”



-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”

* Re: [PATCH 2/6] btrfs: add cancelable chunk relocation support
  2021-06-16 13:55     ` Filipe Manana
@ 2021-06-16 15:53       ` David Sterba
  2021-06-16 15:58       ` [PATCH v2] btrfs: add cancellable " David Sterba
  1 sibling, 0 replies; 29+ messages in thread
From: David Sterba @ 2021-06-16 15:53 UTC (permalink / raw)
  To: Filipe Manana; +Cc: David Sterba, linux-btrfs

On Wed, Jun 16, 2021 at 02:55:46PM +0100, Filipe Manana wrote:
> On Wed, Jun 16, 2021 at 2:54 PM Filipe Manana <fdmanana@gmail.com> wrote:
> >
> > On Fri, May 21, 2021 at 9:15 PM David Sterba <dsterba@suse.com> wrote:
> > >
> > > Add support code that will allow canceling relocation on the chunk
> > > granularity. This is different and independent of balance, that also
> > > uses relocation but is a higher level operation and manages it's own
> > > state and pause/cancelation requests.
> > >
> > > Relocation is used for resize (shrink) and device deletion so this will
> > > be a common point to implement cancelation for both. The context is
> > > entirely in btrfs_relocate_block_group and btrfs_recover_relocation,
> > > enclosing one chunk relocation. The status bit is set and unset between
> > > the chunks. As relocation can take long, the effects may not be
> > > immediate and the request and actual action can slightly race.
> > >
> > > The fs_info::reloc_cancel_req is only supposed to be increased and does
> > > not pair with decrement like fs_info::balance_cancel_req.
> > >
> > > Signed-off-by: David Sterba <dsterba@suse.com>
> > > ---
> > >  fs/btrfs/ctree.h      |  9 +++++++
> > >  fs/btrfs/disk-io.c    |  1 +
> > >  fs/btrfs/relocation.c | 60 ++++++++++++++++++++++++++++++++++++++++++-
> > >  3 files changed, 69 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> > > index a142e56b6b9a..3dfc32a3ebab 100644
> > > --- a/fs/btrfs/ctree.h
> > > +++ b/fs/btrfs/ctree.h
> > > @@ -565,6 +565,12 @@ enum {
> > >          */
> > >         BTRFS_FS_BALANCE_RUNNING,
> > >
> > > +       /*
> > > +        * Indicate that relocation of a chunk has started, it's set per chunk
> > > +        * and is toggled between chunks.
> > > +        */
> > > +       BTRFS_FS_RELOC_RUNNING,
> > > +
> > >         /* Indicate that the cleaner thread is awake and doing something. */
> > >         BTRFS_FS_CLEANER_RUNNING,
> > >
> > > @@ -871,6 +877,9 @@ struct btrfs_fs_info {
> > >         struct btrfs_balance_control *balance_ctl;
> > >         wait_queue_head_t balance_wait_q;
> > >
> > > +       /* Cancelation requests for chunk relocation */
> > > +       atomic_t reloc_cancel_req;
> > > +
> > >         u32 data_chunk_allocations;
> > >         u32 metadata_ratio;
> > >
> > > diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> > > index 8c3db9076988..93c994b78d61 100644
> > > --- a/fs/btrfs/disk-io.c
> > > +++ b/fs/btrfs/disk-io.c
> > > @@ -2251,6 +2251,7 @@ static void btrfs_init_balance(struct btrfs_fs_info *fs_info)
> > >         atomic_set(&fs_info->balance_cancel_req, 0);
> > >         fs_info->balance_ctl = NULL;
> > >         init_waitqueue_head(&fs_info->balance_wait_q);
> > > +       atomic_set(&fs_info->reloc_cancel_req, 0);
> > >  }
> > >
> > >  static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info)
> > > diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
> > > index b70be2ac2e9e..9b84eb86e426 100644
> > > --- a/fs/btrfs/relocation.c
> > > +++ b/fs/btrfs/relocation.c
> > > @@ -2876,11 +2876,12 @@ int setup_extent_mapping(struct inode *inode, u64 start, u64 end,
> > >  }
> > >
> > >  /*
> > > - * Allow error injection to test balance cancellation
> > > + * Allow error injection to test balance/relocation cancellation
> > >   */
> > >  noinline int btrfs_should_cancel_balance(struct btrfs_fs_info *fs_info)
> > >  {
> > >         return atomic_read(&fs_info->balance_cancel_req) ||
> > > +               atomic_read(&fs_info->reloc_cancel_req) ||
> > >                 fatal_signal_pending(current);
> > >  }
> > >  ALLOW_ERROR_INJECTION(btrfs_should_cancel_balance, TRUE);
> > > @@ -3780,6 +3781,47 @@ struct inode *create_reloc_inode(struct btrfs_fs_info *fs_info,
> > >         return inode;
> > >  }
> > >
> > > +/*
> > > + * Mark start of chunk relocation that is cancelable. Check if the cancelation
> > > + * has been requested meanwhile and don't start in that case.
> > > + *
> > > + * Return:
> > > + *   0             success
> > > + *   -EINPROGRESS  operation is already in progress, that's probably a bug
> > > + *   -ECANCELED    cancelation request was set before the operation started
> > > + */
> > > +static int reloc_chunk_start(struct btrfs_fs_info *fs_info)
> > > +{
> > > +       if (test_and_set_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)) {
> > > +               /* This should not happen */
> > > +               btrfs_err(fs_info, "reloc already running, cannot start");
> > > +               return -EINPROGRESS;
> > > +       }
> > > +
> > > +       if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
> > > +               btrfs_info(fs_info, "chunk relocation canceled on start");
> > > +               /*
> > > +                * On cancel, clear all requests but let the caller mark
> > > +                * the end after cleanup operations.
> > > +                */
> > > +               atomic_set(&fs_info->reloc_cancel_req, 0);
> > > +               return -ECANCELED;
> > > +       }
> > > +       return 0;
> > > +}
> > > +
> > > +/*
> > > + * Mark end of chunk relocation that is cancelable and wake any waiters.
> > > + */
> > > +static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
> > > +{
> > > +       /* Requested after start, clear bit first so any waiters can continue */
> > > +       if (atomic_read(&fs_info->reloc_cancel_req) > 0)
> > > +               btrfs_info(fs_info, "chunk relocation canceled during operation");
> > > +       clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
> > > +       atomic_set(&fs_info->reloc_cancel_req, 0);
> > > +}
> > > +
> > >  static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
> > >  {
> > >         struct reloc_control *rc;
> > > @@ -3862,6 +3904,12 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
> > >                 return -ENOMEM;
> > >         }
> > >
> > > +       ret = reloc_chunk_start(fs_info);
> > > +       if (ret < 0) {
> > > +               err = ret;
> > > +               goto out_end;
> >
> > There's a bug here. At out_end we do:
> >
> > btrfs_put_block_group(rc->block_group);
> >
> > But rc->block_group was not yet assigned.
> 
> On misc-next it's actually label "out_put_bg" and under there we do
> the block group put, but not on the version posted here.

Right, thanks for catching it. It should have been
btrfs_put_block_group(bg).

* [PATCH v2] btrfs: add cancellable chunk relocation support
  2021-06-16 13:55     ` Filipe Manana
  2021-06-16 15:53       ` David Sterba
@ 2021-06-16 15:58       ` David Sterba
  2021-06-17  9:18         ` Filipe Manana
  1 sibling, 1 reply; 29+ messages in thread
From: David Sterba @ 2021-06-16 15:58 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba, fdmanana

Add support code that will allow canceling relocation at the chunk
granularity. This is different from and independent of balance, which
also uses relocation but is a higher-level operation and manages its
own state and pause/cancellation requests.

Relocation is used for resize (shrink) and device deletion, so this
will be a common point to implement cancellation for both. The context
is entirely in btrfs_relocate_block_group and btrfs_recover_relocation,
enclosing one chunk relocation. The status bit is set and unset between
the chunks. As relocation can take a long time, the effects may not be
immediate and the request and the actual action can slightly race.

The fs_info::reloc_cancel_req is only supposed to be increased and does
not pair with a decrement like fs_info::balance_cancel_req.

Signed-off-by: David Sterba <dsterba@suse.com>
---

v2:
- put the blockgroup by the original pointer 'bg' in
  btrfs_relocate_block_group

 fs/btrfs/ctree.h      |  9 +++++++
 fs/btrfs/disk-io.c    |  1 +
 fs/btrfs/relocation.c | 62 +++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 4049f533e35e..b7c36aad45ef 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -565,6 +565,12 @@ enum {
 	 */
 	BTRFS_FS_BALANCE_RUNNING,
 
+	/*
+	 * Indicate that relocation of a chunk has started, it's set per chunk
+	 * and is toggled between chunks.
+	 */
+	BTRFS_FS_RELOC_RUNNING,
+
 	/* Indicate that the cleaner thread is awake and doing something. */
 	BTRFS_FS_CLEANER_RUNNING,
 
@@ -871,6 +877,9 @@ struct btrfs_fs_info {
 	struct btrfs_balance_control *balance_ctl;
 	wait_queue_head_t balance_wait_q;
 
+	/* Cancellation requests for chunk relocation */
+	atomic_t reloc_cancel_req;
+
 	u32 data_chunk_allocations;
 	u32 metadata_ratio;
 
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 34bcd986f738..d1d5091a8385 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2251,6 +2251,7 @@ static void btrfs_init_balance(struct btrfs_fs_info *fs_info)
 	atomic_set(&fs_info->balance_cancel_req, 0);
 	fs_info->balance_ctl = NULL;
 	init_waitqueue_head(&fs_info->balance_wait_q);
+	atomic_set(&fs_info->reloc_cancel_req, 0);
 }
 
 static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info)
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index b70be2ac2e9e..420a89869889 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -2876,11 +2876,12 @@ int setup_extent_mapping(struct inode *inode, u64 start, u64 end,
 }
 
 /*
- * Allow error injection to test balance cancellation
+ * Allow error injection to test balance/relocation cancellation
  */
 noinline int btrfs_should_cancel_balance(struct btrfs_fs_info *fs_info)
 {
 	return atomic_read(&fs_info->balance_cancel_req) ||
+		atomic_read(&fs_info->reloc_cancel_req) ||
 		fatal_signal_pending(current);
 }
 ALLOW_ERROR_INJECTION(btrfs_should_cancel_balance, TRUE);
@@ -3780,6 +3781,47 @@ struct inode *create_reloc_inode(struct btrfs_fs_info *fs_info,
 	return inode;
 }
 
+/*
+ * Mark start of chunk relocation that is cancellable. Check if the cancellation
+ * has been requested meanwhile and don't start in that case.
+ *
+ * Return:
+ *   0             success
+ *   -EINPROGRESS  operation is already in progress, that's probably a bug
+ *   -ECANCELED    cancellation request was set before the operation started
+ */
+static int reloc_chunk_start(struct btrfs_fs_info *fs_info)
+{
+	if (test_and_set_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)) {
+		/* This should not happen */
+		btrfs_err(fs_info, "reloc already running, cannot start");
+		return -EINPROGRESS;
+	}
+
+	if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
+		btrfs_info(fs_info, "chunk relocation canceled on start");
+		/*
+		 * On cancel, clear all requests but let the caller mark
+		 * the end after cleanup operations.
+		 */
+		atomic_set(&fs_info->reloc_cancel_req, 0);
+		return -ECANCELED;
+	}
+	return 0;
+}
+
+/*
+ * Mark end of chunk relocation that is cancellable and wake any waiters.
+ */
+static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
+{
+	/* Requested after start, clear bit first so any waiters can continue */
+	if (atomic_read(&fs_info->reloc_cancel_req) > 0)
+		btrfs_info(fs_info, "chunk relocation canceled during operation");
+	clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
+	atomic_set(&fs_info->reloc_cancel_req, 0);
+}
+
 static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
 {
 	struct reloc_control *rc;
@@ -3862,6 +3904,12 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
 		return -ENOMEM;
 	}
 
+	ret = reloc_chunk_start(fs_info);
+	if (ret < 0) {
+		err = ret;
+		goto out_put_bg;
+	}
+
 	rc->extent_root = extent_root;
 	rc->block_group = bg;
 
@@ -3952,7 +4000,9 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
 	if (err && rw)
 		btrfs_dec_block_group_ro(rc->block_group);
 	iput(rc->data_inode);
-	btrfs_put_block_group(rc->block_group);
+out_put_bg:
+	btrfs_put_block_group(bg);
+	reloc_chunk_end(fs_info);
 	free_reloc_control(rc);
 	return err;
 }
@@ -4073,6 +4123,12 @@ int btrfs_recover_relocation(struct btrfs_root *root)
 		goto out;
 	}
 
+	ret = reloc_chunk_start(fs_info);
+	if (ret < 0) {
+		err = ret;
+		goto out_end;
+	}
+
 	rc->extent_root = fs_info->extent_root;
 
 	set_reloc_control(rc);
@@ -4137,6 +4193,8 @@ int btrfs_recover_relocation(struct btrfs_root *root)
 		err = ret;
 out_unset:
 	unset_reloc_control(rc);
+out_end:
+	reloc_chunk_end(fs_info);
 	free_reloc_control(rc);
 out:
 	free_reloc_roots(&reloc_roots);
-- 
2.31.1


* Re: [PATCH v2] btrfs: add cancellable chunk relocation support
  2021-06-16 15:58       ` [PATCH v2] btrfs: add cancellable " David Sterba
@ 2021-06-17  9:18         ` Filipe Manana
  0 siblings, 0 replies; 29+ messages in thread
From: Filipe Manana @ 2021-06-17  9:18 UTC (permalink / raw)
  To: David Sterba; +Cc: linux-btrfs

On Wed, Jun 16, 2021 at 5:01 PM David Sterba <dsterba@suse.com> wrote:
>
> Add support code that will allow canceling relocation on the chunk
> granularity. This is different and independent of balance, that also
> uses relocation but is a higher level operation and manages it's own
> state and pause/cancellation requests.
>
> Relocation is used for resize (shrink) and device deletion so this will
> be a common point to implement cancellation for both. The context is
> entirely in btrfs_relocate_block_group and btrfs_recover_relocation,
> enclosing one chunk relocation. The status bit is set and unset between
> the chunks. As relocation can take long, the effects may not be
> immediate and the request and actual action can slightly race.
>
> The fs_info::reloc_cancel_req is only supposed to be increased and does
> not pair with decrement like fs_info::balance_cancel_req.
>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>
> v2:
> - put the blockgroup by the original pointer 'bg' in
>   btrfs_relocate_block_group

Reviewed-by: Filipe Manana <fdmanana@suse.com>

Looks good now, thanks.

>
>  fs/btrfs/ctree.h      |  9 +++++++
>  fs/btrfs/disk-io.c    |  1 +
>  fs/btrfs/relocation.c | 62 +++++++++++++++++++++++++++++++++++++++++--
>  3 files changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> index 4049f533e35e..b7c36aad45ef 100644
> --- a/fs/btrfs/ctree.h
> +++ b/fs/btrfs/ctree.h
> @@ -565,6 +565,12 @@ enum {
>          */
>         BTRFS_FS_BALANCE_RUNNING,
>
> +       /*
> +        * Indicate that relocation of a chunk has started, it's set per chunk
> +        * and is toggled between chunks.
> +        */
> +       BTRFS_FS_RELOC_RUNNING,
> +
>         /* Indicate that the cleaner thread is awake and doing something. */
>         BTRFS_FS_CLEANER_RUNNING,
>
> @@ -871,6 +877,9 @@ struct btrfs_fs_info {
>         struct btrfs_balance_control *balance_ctl;
>         wait_queue_head_t balance_wait_q;
>
> +       /* Cancellation requests for chunk relocation */
> +       atomic_t reloc_cancel_req;
> +
>         u32 data_chunk_allocations;
>         u32 metadata_ratio;
>
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index 34bcd986f738..d1d5091a8385 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -2251,6 +2251,7 @@ static void btrfs_init_balance(struct btrfs_fs_info *fs_info)
>         atomic_set(&fs_info->balance_cancel_req, 0);
>         fs_info->balance_ctl = NULL;
>         init_waitqueue_head(&fs_info->balance_wait_q);
> +       atomic_set(&fs_info->reloc_cancel_req, 0);
>  }
>
>  static void btrfs_init_btree_inode(struct btrfs_fs_info *fs_info)
> diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
> index b70be2ac2e9e..420a89869889 100644
> --- a/fs/btrfs/relocation.c
> +++ b/fs/btrfs/relocation.c
> @@ -2876,11 +2876,12 @@ int setup_extent_mapping(struct inode *inode, u64 start, u64 end,
>  }
>
>  /*
> - * Allow error injection to test balance cancellation
> + * Allow error injection to test balance/relocation cancellation
>   */
>  noinline int btrfs_should_cancel_balance(struct btrfs_fs_info *fs_info)
>  {
>         return atomic_read(&fs_info->balance_cancel_req) ||
> +               atomic_read(&fs_info->reloc_cancel_req) ||
>                 fatal_signal_pending(current);
>  }
>  ALLOW_ERROR_INJECTION(btrfs_should_cancel_balance, TRUE);
> @@ -3780,6 +3781,47 @@ struct inode *create_reloc_inode(struct btrfs_fs_info *fs_info,
>         return inode;
>  }
>
> +/*
> + * Mark start of chunk relocation that is cancellable. Check if the cancellation
> + * has been requested meanwhile and don't start in that case.
> + *
> + * Return:
> + *   0             success
> + *   -EINPROGRESS  operation is already in progress, that's probably a bug
> + *   -ECANCELED    cancellation request was set before the operation started
> + */
> +static int reloc_chunk_start(struct btrfs_fs_info *fs_info)
> +{
> +       if (test_and_set_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)) {
> +               /* This should not happen */
> +               btrfs_err(fs_info, "reloc already running, cannot start");
> +               return -EINPROGRESS;
> +       }
> +
> +       if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
> +               btrfs_info(fs_info, "chunk relocation canceled on start");
> +               /*
> +                * On cancel, clear all requests but let the caller mark
> +                * the end after cleanup operations.
> +                */
> +               atomic_set(&fs_info->reloc_cancel_req, 0);
> +               return -ECANCELED;
> +       }
> +       return 0;
> +}
> +
> +/*
> + * Mark end of chunk relocation that is cancellable and wake any waiters.
> + */
> +static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
> +{
> +       /* Requested after start, clear bit first so any waiters can continue */
> +       if (atomic_read(&fs_info->reloc_cancel_req) > 0)
> +               btrfs_info(fs_info, "chunk relocation canceled during operation");
> +       clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
> +       atomic_set(&fs_info->reloc_cancel_req, 0);
> +}
> +
>  static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
>  {
>         struct reloc_control *rc;
> @@ -3862,6 +3904,12 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
>                 return -ENOMEM;
>         }
>
> +       ret = reloc_chunk_start(fs_info);
> +       if (ret < 0) {
> +               err = ret;
> +               goto out_put_bg;
> +       }
> +
>         rc->extent_root = extent_root;
>         rc->block_group = bg;
>
> @@ -3952,7 +4000,9 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
>         if (err && rw)
>                 btrfs_dec_block_group_ro(rc->block_group);
>         iput(rc->data_inode);
> -       btrfs_put_block_group(rc->block_group);
> +out_put_bg:
> +       btrfs_put_block_group(bg);
> +       reloc_chunk_end(fs_info);
>         free_reloc_control(rc);
>         return err;
>  }
> @@ -4073,6 +4123,12 @@ int btrfs_recover_relocation(struct btrfs_root *root)
>                 goto out;
>         }
>
> +       ret = reloc_chunk_start(fs_info);
> +       if (ret < 0) {
> +               err = ret;
> +               goto out_end;
> +       }
> +
>         rc->extent_root = fs_info->extent_root;
>
>         set_reloc_control(rc);
> @@ -4137,6 +4193,8 @@ int btrfs_recover_relocation(struct btrfs_root *root)
>                 err = ret;
>  out_unset:
>         unset_reloc_control(rc);
> +out_end:
> +       reloc_chunk_end(fs_info);
>         free_reloc_control(rc);
>  out:
>         free_reloc_roots(&reloc_roots);
> --
> 2.31.1
>


-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 0/6] Support resize and device delete cancel ops
  2021-05-21 12:06 [PATCH 0/6] Support resize and device delete cancel ops David Sterba
                   ` (7 preceding siblings ...)
  2021-05-21 12:06 ` [PATCH 2/2] btrfs-progs: fi resize: " David Sterba
@ 2021-12-14 14:49 ` Anand Jain
  2021-12-15 15:13   ` David Sterba
  8 siblings, 1 reply; 29+ messages in thread
From: Anand Jain @ 2021-12-14 14:49 UTC (permalink / raw)
  To: David Sterba, linux-btrfs


On 21/05/2021 20:06, David Sterba wrote:
> We don't have a nice interface to cancel the resize or device deletion
> from a command. Since recently, both commands can be interrupted by a
> signal, which also means Ctrl-C from terminal, but given the long
> history of absence of the commands I think this is not yet well known.
> 
> Examples:
> 
>    $ btrfs fi resize -10G /mnt
>    ...
>    $ btrfs fi resize cancel /mnt
> 
>    $ btrfs device delete /dev/sdx /mnt
>    ...
>    $ btrfs device delete cancel /mnt


David,

These cancel commands don't fail with an unsupported-ioctl error code on 
older kernels, since we didn't define a new ioctl for this purpose.

Generally the latest btrfs-progs is compatible with older kernels, which 
is especially useful for the fsck fixes.

Moving forward, will newer btrfs-progs versions continue to be 
compatible with older kernels?

Thanks Anand

> 
> The cancel request returns once the resize/delete command finishes
> processing of the currently relocated chunk. The btrfs-progs needs to be
> updated as well to skip checks of the sysfs exclusive_operation file
> added in 5.10 (raw ioctl would work).
> 
> David Sterba (6):
>    btrfs: protect exclusive_operation by super_lock
>    btrfs: add cancelable chunk relocation support
>    btrfs: introduce try-lock semantics for exclusive op start
>    btrfs: add wrapper for conditional start of exclusive operation
>    btrfs: add cancelation to resize
>    btrfs: add device delete cancel
> 
>   fs/btrfs/ctree.h      |  16 +++-
>   fs/btrfs/disk-io.c    |   1 +
>   fs/btrfs/ioctl.c      | 174 ++++++++++++++++++++++++++++++++----------
>   fs/btrfs/relocation.c |  60 ++++++++++++++-
>   4 files changed, 207 insertions(+), 44 deletions(-)
> 



* Re: [PATCH 0/6] Support resize and device delete cancel ops
  2021-12-14 14:49 ` [PATCH 0/6] Support resize and device delete cancel ops Anand Jain
@ 2021-12-15 15:13   ` David Sterba
  0 siblings, 0 replies; 29+ messages in thread
From: David Sterba @ 2021-12-15 15:13 UTC (permalink / raw)
  To: Anand Jain; +Cc: David Sterba, linux-btrfs

On Tue, Dec 14, 2021 at 10:49:37PM +0800, Anand Jain wrote:
> 
> On 21/05/2021 20:06, David Sterba wrote:
> > We don't have a nice interface to cancel the resize or device deletion
> > from a command. Since recently, both commands can be interrupted by a
> > signal, which also means Ctrl-C from terminal, but given the long
> > history of absence of the commands I think this is not yet well known.
> > 
> > Examples:
> > 
> >    $ btrfs fi resize -10G /mnt
> >    ...
> >    $ btrfs fi resize cancel /mnt
> > 
> >    $ btrfs device delete /dev/sdx /mnt
> >    ...
> >    $ btrfs device delete cancel /mnt
> 
> 
> David,
> 
> These cancel commands don't fail with an unsupported-ioctl error code on 
> older kernels, since we didn't define a new ioctl for this purpose.
> 
> Generally, the latest btrfs-progs are compatible with the older kernels, 
> especially useful for the fsck fixes.
> 
> Moving forward, will newer btrfs-progs versions continue to be 
> compatible with older kernels?

Yes, compatibility between old kernels and new (latest) progs is supposed
to be maintained, e.g. by adding checks for supported features or for
whether the ioctls work; a version check is the last resort.


end of thread, other threads:[~2021-12-15 15:13 UTC | newest]

Thread overview: 29+ messages
2021-05-21 12:06 [PATCH 0/6] Support resize and device delete cancel ops David Sterba
2021-05-21 12:06 ` [PATCH 1/6] btrfs: protect exclusive_operation by super_lock David Sterba
2021-05-21 13:37   ` Josef Bacik
2021-05-21 12:06 ` [PATCH 2/6] btrfs: add cancelable chunk relocation support David Sterba
2021-05-21 13:21   ` Josef Bacik
2021-05-26 22:56     ` David Sterba
2021-06-16 13:54   ` Filipe Manana
2021-06-16 13:55     ` Filipe Manana
2021-06-16 15:53       ` David Sterba
2021-06-16 15:58       ` [PATCH v2] btrfs: add cancellable " David Sterba
2021-06-17  9:18         ` Filipe Manana
2021-05-21 12:06 ` [PATCH 3/6] btrfs: introduce try-lock semantics for exclusive op start David Sterba
2021-05-21 13:38   ` Josef Bacik
2021-05-27  7:43   ` Anand Jain
2021-05-28 12:30     ` David Sterba
2021-05-29 13:48       ` Anand Jain
2021-05-31 18:23         ` David Sterba
2021-05-21 12:06 ` [PATCH 4/6] btrfs: add wrapper for conditional start of exclusive operation David Sterba
2021-05-21 13:29   ` Josef Bacik
2021-05-21 16:45     ` David Sterba
2021-05-26 22:24       ` David Sterba
2021-05-21 12:06 ` [PATCH 5/6] btrfs: add cancelation to resize David Sterba
2021-05-21 13:38   ` Josef Bacik
2021-05-21 12:06 ` [PATCH 6/6] btrfs: add device delete cancel David Sterba
2021-05-21 13:38   ` Josef Bacik
2021-05-21 12:06 ` [PATCH 1/2] btrfs-progs: device remove: add support for cancel David Sterba
2021-05-21 12:06 ` [PATCH 2/2] btrfs-progs: fi resize: " David Sterba
2021-12-14 14:49 ` [PATCH 0/6] Support resize and device delete cancel ops Anand Jain
2021-12-15 15:13   ` David Sterba
