* [PATCH v3 0/3] btrfs: qgroup rescan races (part 1)
@ 2018-05-02 21:11 jeffm
  2018-05-02 21:11 ` [PATCH 1/3] btrfs: qgroups, fix rescan worker running races jeffm
                   ` (4 more replies)
  0 siblings, 5 replies; 34+ messages in thread
From: jeffm @ 2018-05-02 21:11 UTC (permalink / raw)
  To: dsterba, linux-btrfs; +Cc: Jeff Mahoney

From: Jeff Mahoney <jeffm@suse.com>

Hi Dave -

Here's the updated patchset for the rescan races.  This fixes the issue
where we'd try to start multiple workers.  It introduces a new "ready"
bool that we set during initialization and clear while queuing the worker.
The queuer is also now responsible for most of the initialization.
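
To make the intent concrete, here's a rough sketch of the queue-side
handshake described above.  This is illustrative only; the "ready" field
name and the exact checks are assumptions, not the literal v3 code:

static void queue_rescan_worker(struct btrfs_fs_info *fs_info)
{
	mutex_lock(&fs_info->qgroup_rescan_lock);
	/* "ready" is the hypothetical flag set by qgroup_rescan_init */
	if (!fs_info->qgroup_rescan_ready) {
		/* never initialized, or someone already queued the worker */
		mutex_unlock(&fs_info->qgroup_rescan_lock);
		return;
	}
	fs_info->qgroup_rescan_ready = false;

	/* once this is set, wait-for-completion callers must block */
	fs_info->qgroup_rescan_running = true;
	mutex_unlock(&fs_info->qgroup_rescan_lock);

	btrfs_queue_work(fs_info->qgroup_rescan_workers,
			 &fs_info->qgroup_rescan_work);
}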

I have a separate patch set started that gets rid of the racy mess
surrounding the rescan worker startup.  We can handle the startup in
btrfs_run_qgroups itself and just set a flag to request it everywhere else.

-Jeff

---

Jeff Mahoney (3):
  btrfs: qgroups, fix rescan worker running races
  btrfs: qgroups, remove unnecessary memset before btrfs_init_work
  btrfs: qgroup, don't try to insert status item after ENOMEM in rescan
    worker

 fs/btrfs/async-thread.c |   1 +
 fs/btrfs/ctree.h        |   2 +
 fs/btrfs/qgroup.c       | 100 +++++++++++++++++++++++++++---------------------
 3 files changed, 60 insertions(+), 43 deletions(-)

-- 
2.12.3


* [PATCH 1/3] btrfs: qgroups, fix rescan worker running races
@ 2018-04-26 19:23 jeffm
  2018-04-27  8:42 ` Nikolay Borisov
                   ` (4 more replies)
  0 siblings, 5 replies; 34+ messages in thread
From: jeffm @ 2018-04-26 19:23 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Jeff Mahoney

From: Jeff Mahoney <jeffm@suse.com>

Commit d2c609b834d6 ("Btrfs: fix qgroup rescan worker initialization")
fixed the issue with BTRFS_IOC_QUOTA_RESCAN_WAIT being racy, but it
ended up reintroducing the hang-on-unmount bug that the commit it was
fixing had addressed.

The race this time is between qgroup_rescan_init setting
->qgroup_rescan_running = true and the worker starting.  There are
many scenarios where we initialize the worker and never start it.  The
completion btrfs_ioctl_quota_rescan_wait waits for will never come.
This can happen even without involving error handling, since mounting
the file system read-only returns between initializing the worker and
queueing it.

The right place to set the flag is when we're queuing the worker.  The
flag really just means that btrfs_ioctl_quota_rescan_wait should wait
for a completion.

This patch introduces a new helper, queue_rescan_worker, that handles
the ->qgroup_rescan_running flag, including any races with umount.

While we're at it: ->qgroup_rescan_running is protected only by the
->qgroup_rescan_lock mutex, so btrfs_qgroup_wait_for_completion doesn't
need to take the ->qgroup_lock spinlock as well.
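
For illustration, the wait side then looks roughly like this (a reduced
sketch of btrfs_qgroup_wait_for_completion with the interruptible case
omitted), which shows why setting the flag without queuing the worker
hangs forever:

	mutex_lock(&fs_info->qgroup_rescan_lock);
	running = fs_info->qgroup_rescan_running;
	mutex_unlock(&fs_info->qgroup_rescan_lock);

	/*
	 * qgroup_rescan_completion is only completed by the rescan worker,
	 * so if "running" was set but the worker was never queued, this
	 * wait never returns.
	 */
	if (running)
		wait_for_completion(&fs_info->qgroup_rescan_completion);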

Fixes: d2c609b834d6 ("Btrfs: fix qgroup rescan worker initialization")
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
---
 fs/btrfs/ctree.h  |  1 +
 fs/btrfs/qgroup.c | 40 ++++++++++++++++++++++++++++------------
 2 files changed, 29 insertions(+), 12 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index da308774b8a4..dbba615f4d0f 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1045,6 +1045,7 @@ struct btrfs_fs_info {
 	struct btrfs_workqueue *qgroup_rescan_workers;
 	struct completion qgroup_rescan_completion;
 	struct btrfs_work qgroup_rescan_work;
+	/* qgroup rescan worker is running or queued to run */
 	bool qgroup_rescan_running;	/* protected by qgroup_rescan_lock */
 
 	/* filesystem state */
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index aa259d6986e1..be491b6c020a 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -2072,6 +2072,30 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans,
 	return ret;
 }
 
+static void queue_rescan_worker(struct btrfs_fs_info *fs_info)
+{
+	mutex_lock(&fs_info->qgroup_rescan_lock);
+	if (btrfs_fs_closing(fs_info)) {
+		mutex_unlock(&fs_info->qgroup_rescan_lock);
+		return;
+	}
+	if (WARN_ON(fs_info->qgroup_rescan_running)) {
+		btrfs_warn(fs_info, "rescan worker already queued");
+		mutex_unlock(&fs_info->qgroup_rescan_lock);
+		return;
+	}
+
+	/*
+	 * Being queued is enough for btrfs_qgroup_wait_for_completion
+	 * to need to wait.
+	 */
+	fs_info->qgroup_rescan_running = true;
+	mutex_unlock(&fs_info->qgroup_rescan_lock);
+
+	btrfs_queue_work(fs_info->qgroup_rescan_workers,
+			 &fs_info->qgroup_rescan_work);
+}
+
 /*
  * called from commit_transaction. Writes all changed qgroups to disk.
  */
@@ -2123,8 +2147,7 @@ int btrfs_run_qgroups(struct btrfs_trans_handle *trans,
 		ret = qgroup_rescan_init(fs_info, 0, 1);
 		if (!ret) {
 			qgroup_rescan_zero_tracking(fs_info);
-			btrfs_queue_work(fs_info->qgroup_rescan_workers,
-					 &fs_info->qgroup_rescan_work);
+			queue_rescan_worker(fs_info);
 		}
 		ret = 0;
 	}
@@ -2713,7 +2736,6 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid,
 		sizeof(fs_info->qgroup_rescan_progress));
 	fs_info->qgroup_rescan_progress.objectid = progress_objectid;
 	init_completion(&fs_info->qgroup_rescan_completion);
-	fs_info->qgroup_rescan_running = true;
 
 	spin_unlock(&fs_info->qgroup_lock);
 	mutex_unlock(&fs_info->qgroup_rescan_lock);
@@ -2785,9 +2807,7 @@ btrfs_qgroup_rescan(struct btrfs_fs_info *fs_info)
 
 	qgroup_rescan_zero_tracking(fs_info);
 
-	btrfs_queue_work(fs_info->qgroup_rescan_workers,
-			 &fs_info->qgroup_rescan_work);
-
+	queue_rescan_worker(fs_info);
 	return 0;
 }
 
@@ -2798,9 +2818,7 @@ int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info,
 	int ret = 0;
 
 	mutex_lock(&fs_info->qgroup_rescan_lock);
-	spin_lock(&fs_info->qgroup_lock);
 	running = fs_info->qgroup_rescan_running;
-	spin_unlock(&fs_info->qgroup_lock);
 	mutex_unlock(&fs_info->qgroup_rescan_lock);
 
 	if (!running)
@@ -2819,12 +2837,10 @@ int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info,
  * this is only called from open_ctree where we're still single threaded, thus
  * locking is omitted here.
  */
-void
-btrfs_qgroup_rescan_resume(struct btrfs_fs_info *fs_info)
+void btrfs_qgroup_rescan_resume(struct btrfs_fs_info *fs_info)
 {
 	if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN)
-		btrfs_queue_work(fs_info->qgroup_rescan_workers,
-				 &fs_info->qgroup_rescan_work);
+		queue_rescan_worker(fs_info);
 }
 
 /*
-- 
2.12.3



Thread overview: 34+ messages
2018-05-02 21:11 [PATCH v3 0/3] btrfs: qgroup rescan races (part 1) jeffm
2018-05-02 21:11 ` [PATCH 1/3] btrfs: qgroups, fix rescan worker running races jeffm
2018-05-03  7:24   ` Nikolay Borisov
2018-05-03 13:39     ` Jeff Mahoney
2018-05-03 15:52       ` Nikolay Borisov
2018-05-03 15:57         ` Jeff Mahoney
2018-05-10 19:49   ` Jeff Mahoney
2018-05-10 23:04   ` Jeff Mahoney
2020-01-16  6:41   ` Qu Wenruo
2018-05-02 21:11 ` [PATCH 2/3] btrfs: qgroups, remove unnecessary memset before btrfs_init_work jeffm
2018-05-02 21:11 ` [PATCH 3/3] btrfs: qgroup, don't try to insert status item after ENOMEM in rescan worker jeffm
2018-05-03  6:23 ` [PATCH v3 0/3] btrfs: qgroup rescan races (part 1) Nikolay Borisov
2018-05-03 22:27   ` Jeff Mahoney
2018-05-04  5:59     ` Nikolay Borisov
2018-05-04 13:32       ` Jeff Mahoney
2018-05-04 13:41         ` Nikolay Borisov
2019-11-28  3:28 ` Qu Wenruo
2019-12-03 19:32   ` David Sterba
  -- strict thread matches above, loose matches on Subject: below --
2018-04-26 19:23 [PATCH 1/3] btrfs: qgroups, fix rescan worker running races jeffm
2018-04-27  8:42 ` Nikolay Borisov
2018-04-27  8:48 ` Filipe Manana
2018-04-27 16:00   ` Jeff Mahoney
2018-04-27 15:56 ` David Sterba
2018-04-27 16:02   ` Jeff Mahoney
2018-04-27 16:40     ` David Sterba
2018-04-27 19:32       ` Jeff Mahoney
2018-04-28 17:09         ` David Sterba
2018-04-27 19:28   ` Noah Massey
2018-04-28 17:10     ` David Sterba
2018-04-30  6:20 ` Qu Wenruo
2018-04-30 14:07   ` Jeff Mahoney
2018-05-02 10:29 ` David Sterba
2018-05-02 13:15   ` David Sterba
2018-05-02 13:58     ` Jeff Mahoney
