linux-kernel.vger.kernel.org archive mirror
* fix queue_lock usage in blk-mq and nvme
@ 2015-05-07  7:38 Christoph Hellwig
  2015-05-07  7:38 ` [PATCH 1/2] block: use an atomic_t for mq_freeze_depth Christoph Hellwig
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Christoph Hellwig @ 2015-05-07  7:38 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-nvme, linux-kernel

Historically we have always taken queue_lock with irqs disabled.  Blk-mq doesn't
really use the queue_lock much, but when it does it needs to follow these rules
to keep lockdep happy.

The first patch removes a queue_lock usage instead of fixing things properly,
and the second is a band-aid for nvme.  In the long run I'd prefer to remove
the other users of the queue_lock from blk-mq and blk-mq based drivers
entirely, but that will require a bit more work.



* [PATCH 1/2] block: use an atomic_t for mq_freeze_depth
  2015-05-07  7:38 fix queue_lock usage in blk-mq and nvme Christoph Hellwig
@ 2015-05-07  7:38 ` Christoph Hellwig
  2015-05-07  7:38 ` [PATCH 2/2] nvme: disable irqs in nvme_freeze_queues Christoph Hellwig
  2015-05-19  6:37 ` fix queue_lock usage in blk-mq and nvme Christoph Hellwig
  2 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2015-05-07  7:38 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-nvme, linux-kernel

lockdep gets unhappy about not disabling irqs when taking the queue_lock
around it.  Instead of trying to fix that up, just switch to an atomic_t
and get rid of the lock.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c         | 24 ++++++++++--------------
 include/linux/blkdev.h |  2 +-
 2 files changed, 11 insertions(+), 15 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index ade8a2d..9f554bb 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -89,7 +89,8 @@ static int blk_mq_queue_enter(struct request_queue *q, gfp_t gfp)
 			return -EBUSY;
 
 		ret = wait_event_interruptible(q->mq_freeze_wq,
-				!q->mq_freeze_depth || blk_queue_dying(q));
+				!atomic_read(&q->mq_freeze_depth) ||
+				blk_queue_dying(q));
 		if (blk_queue_dying(q))
 			return -ENODEV;
 		if (ret)
@@ -112,13 +113,10 @@ static void blk_mq_usage_counter_release(struct percpu_ref *ref)
 
 void blk_mq_freeze_queue_start(struct request_queue *q)
 {
-	bool freeze;
+	int freeze_depth;
 
-	spin_lock_irq(q->queue_lock);
-	freeze = !q->mq_freeze_depth++;
-	spin_unlock_irq(q->queue_lock);
-
-	if (freeze) {
+	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
+	if (freeze_depth == 1) {
 		percpu_ref_kill(&q->mq_usage_counter);
 		blk_mq_run_hw_queues(q, false);
 	}
@@ -143,13 +141,11 @@ EXPORT_SYMBOL_GPL(blk_mq_freeze_queue);
 
 void blk_mq_unfreeze_queue(struct request_queue *q)
 {
-	bool wake;
+	int freeze_depth;
 
-	spin_lock_irq(q->queue_lock);
-	wake = !--q->mq_freeze_depth;
-	WARN_ON_ONCE(q->mq_freeze_depth < 0);
-	spin_unlock_irq(q->queue_lock);
-	if (wake) {
+	freeze_depth = atomic_dec_return(&q->mq_freeze_depth);
+	WARN_ON_ONCE(freeze_depth < 0);
+	if (!freeze_depth) {
 		percpu_ref_reinit(&q->mq_usage_counter);
 		wake_up_all(&q->mq_freeze_wq);
 	}
@@ -2047,7 +2043,7 @@ void blk_mq_free_queue(struct request_queue *q)
 /* Basically redo blk_mq_init_queue with queue frozen */
 static void blk_mq_queue_reinit(struct request_queue *q)
 {
-	WARN_ON_ONCE(!q->mq_freeze_depth);
+	WARN_ON_ONCE(!atomic_read(&q->mq_freeze_depth));
 
 	blk_mq_sysfs_unregister(q);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index a7f7c23..a10bed8 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -443,7 +443,7 @@ struct request_queue {
 	struct mutex		sysfs_lock;
 
 	int			bypass_depth;
-	int			mq_freeze_depth;
+	atomic_t		mq_freeze_depth;
 
 #if defined(CONFIG_BLK_DEV_BSG)
 	bsg_job_fn		*bsg_job_fn;
-- 
1.9.1



* [PATCH 2/2] nvme: disable irqs in nvme_freeze_queues
  2015-05-07  7:38 fix queue_lock usage in blk-mq and nvme Christoph Hellwig
  2015-05-07  7:38 ` [PATCH 1/2] block: use an atomic_t for mq_freeze_depth Christoph Hellwig
@ 2015-05-07  7:38 ` Christoph Hellwig
  2015-05-19  6:37 ` fix queue_lock usage in blk-mq and nvme Christoph Hellwig
  2 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2015-05-07  7:38 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-nvme, linux-kernel

The queue_lock needs to be taken with irqs disabled.  This is mostly
due to the old pre-blk-mq usage pattern, but we've also picked it up
in most of the few places where we use the queue_lock with blk-mq.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/nvme-core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 85b8036..00e6419 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -2585,9 +2585,9 @@ static void nvme_freeze_queues(struct nvme_dev *dev)
 	list_for_each_entry(ns, &dev->namespaces, list) {
 		blk_mq_freeze_queue_start(ns->queue);
 
-		spin_lock(ns->queue->queue_lock);
+		spin_lock_irq(ns->queue->queue_lock);
 		queue_flag_set(QUEUE_FLAG_STOPPED, ns->queue);
-		spin_unlock(ns->queue->queue_lock);
+		spin_unlock_irq(ns->queue->queue_lock);
 
 		blk_mq_cancel_requeue_work(ns->queue);
 		blk_mq_stop_hw_queues(ns->queue);
-- 
1.9.1



* Re: fix queue_lock usage in blk-mq and nvme
  2015-05-07  7:38 fix queue_lock usage in blk-mq and nvme Christoph Hellwig
  2015-05-07  7:38 ` [PATCH 1/2] block: use an atomic_t for mq_freeze_depth Christoph Hellwig
  2015-05-07  7:38 ` [PATCH 2/2] nvme: disable irqs in nvme_freeze_queues Christoph Hellwig
@ 2015-05-19  6:37 ` Christoph Hellwig
  2015-05-19 15:14   ` Jens Axboe
  2 siblings, 1 reply; 5+ messages in thread
From: Christoph Hellwig @ 2015-05-19  6:37 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-kernel, linux-nvme

ping?

On Thu, May 07, 2015 at 09:38:12AM +0200, Christoph Hellwig wrote:
> Historically we have always taken queue_lock with irqs disabled.  Blk-mq doesn't
> really use the queue_lock much, but when it does it needs to follow these rules
> to keep lockdep happy.
> 
> The first patch removes a queue_lock usage instead of fixing things properly,
> and the second is a band-aid for nvme.  In the long run I'd prefer to remove
> the other users of the queue_lock from blk-mq and blk-mq based drivers
> entirely, but that will require a bit more work.
---end quoted text---


* Re: fix queue_lock usage in blk-mq and nvme
  2015-05-19  6:37 ` fix queue_lock usage in blk-mq and nvme Christoph Hellwig
@ 2015-05-19 15:14   ` Jens Axboe
  0 siblings, 0 replies; 5+ messages in thread
From: Jens Axboe @ 2015-05-19 15:14 UTC (permalink / raw)
  To: Christoph Hellwig, Christoph Hellwig; +Cc: linux-kernel, linux-nvme

On 05/19/2015 12:37 AM, Christoph Hellwig wrote:
> ping?

Sorry, both applied, looks fine to me and won't impact the fast path.

-- 
Jens Axboe



