linux-kernel.vger.kernel.org archive mirror
* next round of blk-mq updates
@ 2014-04-16  7:44 Christoph Hellwig
  2014-04-16  7:44 ` [PATCH 1/8] blk-mq: allow drivers to hook into I/O completion Christoph Hellwig
                   ` (7 more replies)
  0 siblings, 8 replies; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16  7:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

Hi Jens,

these are the final blk-mq changes for a fully working SCSI midlayer
using blk-mq.

Summary of the changes:

 - a new split I/O completion handler that allows the driver to free
   resources when it knows a request will be fully completed, but
   before it has been freed
 - support for bidirectional requests, which is trivial when combined
   with the above split I/O completion handler
 - support for requeueing a request that has already entered the driver,
   which the SCSI midlayer needs for partial completions as well as
   various error conditions
 - a couple of new ways to poke a queue:
	- an equivalent to blk_delay_queue to wake a stopped
	  queue after a delay
	- a new function to kick a queue that might be stopped or not
	- a parameter to blk_mq_start_stopped_hw_queues so that it can
	  be called from (soft)irq context



* [PATCH 1/8] blk-mq: allow drivers to hook into I/O completion
  2014-04-16  7:44 next round of blk-mq updates Christoph Hellwig
@ 2014-04-16  7:44 ` Christoph Hellwig
  2014-04-16  7:44 ` [PATCH 2/8] blk-mq: bidi support Christoph Hellwig
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16  7:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

Split out the bottom half of blk_mq_end_io so that drivers can perform
work when they know a request has been completed, but before it has been
freed.  This also obsoletes blk_mq_end_io_partial as drivers can now
pass any value to blk_update_request directly.
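
For illustration, a driver-side completion built on the split handler might
look roughly like the sketch below; my_complete_rq and my_free_cmd_resources
are hypothetical names, not part of this series:

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/* Sketch of a driver completion path built on the split handler. */
static void my_complete_rq(struct request *rq, int error)
{
        /* report the whole request as transferred */
        if (blk_update_request(rq, error, blk_rq_bytes(rq)))
                BUG();

        /*
         * The request is fully completed but not yet freed, so it is
         * still safe to touch it and release driver-owned resources.
         */
        my_free_cmd_resources(blk_mq_rq_to_pdu(rq));    /* hypothetical */

        /* accounts the request and frees it (or calls ->end_io) */
        __blk_mq_end_io(rq, error);
}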

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c         |   16 ++++++++++------
 include/linux/blk-mq.h |    9 ++-------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 360a1d9..e921085 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -294,20 +294,24 @@ void blk_mq_clone_flush_request(struct request *flush_rq,
 		hctx->cmd_size);
 }
 
-bool blk_mq_end_io_partial(struct request *rq, int error, unsigned int nr_bytes)
+inline void __blk_mq_end_io(struct request *rq, int error)
 {
-	if (blk_update_request(rq, error, blk_rq_bytes(rq)))
-		return true;
-
 	blk_account_io_done(rq);
 
 	if (rq->end_io)
 		rq->end_io(rq, error);
 	else
 		blk_mq_free_request(rq);
-	return false;
 }
-EXPORT_SYMBOL(blk_mq_end_io_partial);
+EXPORT_SYMBOL(__blk_mq_end_io);
+
+void blk_mq_end_io(struct request *rq, int error)
+{
+	if (blk_update_request(rq, error, blk_rq_bytes(rq)))
+		BUG();
+	__blk_mq_end_io(rq, error);
+}
+EXPORT_SYMBOL(blk_mq_end_io);
 
 static void __blk_mq_complete_request_remote(void *data)
 {
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index a4ea0ce..a81b474 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -149,13 +149,8 @@ struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *, const int ctx_ind
 struct blk_mq_hw_ctx *blk_mq_alloc_single_hw_queue(struct blk_mq_tag_set *, unsigned int);
 void blk_mq_free_single_hw_queue(struct blk_mq_hw_ctx *, unsigned int);
 
-bool blk_mq_end_io_partial(struct request *rq, int error,
-		unsigned int nr_bytes);
-static inline void blk_mq_end_io(struct request *rq, int error)
-{
-	bool done = !blk_mq_end_io_partial(rq, error, blk_rq_bytes(rq));
-	BUG_ON(!done);
-}
+void blk_mq_end_io(struct request *rq, int error);
+void __blk_mq_end_io(struct request *rq, int error);
 
 void blk_mq_complete_request(struct request *rq);
 
-- 
1.7.10.4



* [PATCH 2/8] blk-mq: bidi support
  2014-04-16  7:44 next round of blk-mq updates Christoph Hellwig
  2014-04-16  7:44 ` [PATCH 1/8] blk-mq: allow drivers to hook into I/O completion Christoph Hellwig
@ 2014-04-16  7:44 ` Christoph Hellwig
  2014-04-16 16:41   ` Jens Axboe
  2014-04-16  7:44 ` [PATCH 3/8] blk-mq: add async parameter to blk_mq_start_stopped_hw_queues Christoph Hellwig
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16  7:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

Add two unlikely branches to make sure that the resid is initialized
correctly for bidi request pairs and that the second request gets properly
freed.
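
As a rough sketch of the driver side (not taken from this patch), a bidi
completion would report the residual count for each direction before ending
the request; my_complete_bidi and its residual arguments are made up for
illustration:

/* Sketch: finish a bidi request pair with per-direction residuals. */
static void my_complete_bidi(struct request *rq, int error,
                             unsigned int out_resid, unsigned int in_resid)
{
        rq->resid_len = out_resid;
        if (blk_bidi_rq(rq))
                rq->next_rq->resid_len = in_resid;

        /*
         * With this patch the core also frees rq->next_rq when the
         * request itself is freed (i.e. when no ->end_io is set).
         */
        blk_mq_end_io(rq, error);
}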

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c |    9 +++++++--
 block/bsg.c    |    2 +-
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index e921085..afdab13 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -298,10 +298,13 @@ inline void __blk_mq_end_io(struct request *rq, int error)
 {
 	blk_account_io_done(rq);
 
-	if (rq->end_io)
+	if (rq->end_io) {
 		rq->end_io(rq, error);
-	else
+	} else {
+		if (unlikely(blk_bidi_rq(rq)))
+			blk_mq_free_request(rq->next_rq);
 		blk_mq_free_request(rq);
+	}
 }
 EXPORT_SYMBOL(__blk_mq_end_io);
 
@@ -366,6 +369,8 @@ static void blk_mq_start_request(struct request *rq, bool last)
 	trace_block_rq_issue(q, rq);
 
 	rq->resid_len = blk_rq_bytes(rq);
+	if (unlikely(blk_bidi_rq(rq)))
+		rq->next_rq->resid_len = blk_rq_bytes(rq->next_rq);
 
 	/*
 	 * Just mark start time and set the started bit. Due to memory
diff --git a/block/bsg.c b/block/bsg.c
index 420a5a9..2956086 100644
--- a/block/bsg.c
+++ b/block/bsg.c
@@ -1008,7 +1008,7 @@ int bsg_register_queue(struct request_queue *q, struct device *parent,
 	/*
 	 * we need a proper transport to send commands, not a stacked device
 	 */
-	if (!q->request_fn)
+	if (!q->request_fn && !q->mq_ops)
 		return 0;
 
 	bcd = &q->bsg_dev;
-- 
1.7.10.4



* [PATCH 3/8] blk-mq: add async parameter to blk_mq_start_stopped_hw_queues
  2014-04-16  7:44 next round of blk-mq updates Christoph Hellwig
  2014-04-16  7:44 ` [PATCH 1/8] blk-mq: allow drivers to hook into I/O completion Christoph Hellwig
  2014-04-16  7:44 ` [PATCH 2/8] blk-mq: bidi support Christoph Hellwig
@ 2014-04-16  7:44 ` Christoph Hellwig
  2014-04-16  7:44 ` [PATCH 4/8] blk-mq: add blk_mq_delay_queue Christoph Hellwig
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16  7:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c             |    4 ++--
 drivers/block/virtio_blk.c |    4 ++--
 include/linux/blk-mq.h     |    2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index afdab13..8a080c2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -698,7 +698,7 @@ void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx)
 }
 EXPORT_SYMBOL(blk_mq_start_hw_queue);
 
-void blk_mq_start_stopped_hw_queues(struct request_queue *q)
+void blk_mq_start_stopped_hw_queues(struct request_queue *q, bool async)
 {
 	struct blk_mq_hw_ctx *hctx;
 	int i;
@@ -709,7 +709,7 @@ void blk_mq_start_stopped_hw_queues(struct request_queue *q)
 
 		clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
 		preempt_disable();
-		blk_mq_run_hw_queue(hctx, true);
+		blk_mq_run_hw_queue(hctx, async);
 		preempt_enable();
 	}
 }
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index f909a88..7a51f06 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -151,7 +151,7 @@ static void virtblk_done(struct virtqueue *vq)
 
 	/* In case queue is stopped waiting for more buffers. */
 	if (req_done)
-		blk_mq_start_stopped_hw_queues(vblk->disk->queue);
+		blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
 }
 
 static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *req)
@@ -762,7 +762,7 @@ static int virtblk_restore(struct virtio_device *vdev)
 	vblk->config_enable = true;
 	ret = init_vq(vdev->priv);
 	if (!ret)
-		blk_mq_start_stopped_hw_queues(vblk->disk->queue);
+		blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
 
 	return ret;
 }
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index a81b474..9ecfab9 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -157,7 +157,7 @@ void blk_mq_complete_request(struct request *rq);
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx);
 void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx);
 void blk_mq_stop_hw_queues(struct request_queue *q);
-void blk_mq_start_stopped_hw_queues(struct request_queue *q);
+void blk_mq_start_stopped_hw_queues(struct request_queue *q, bool async);
 
 /*
  * Driver command data is immediately after the request. So subtract request
-- 
1.7.10.4



* [PATCH 4/8] blk-mq: add blk_mq_delay_queue
  2014-04-16  7:44 next round of blk-mq updates Christoph Hellwig
                   ` (2 preceding siblings ...)
  2014-04-16  7:44 ` [PATCH 3/8] blk-mq: add async parameter to blk_mq_start_stopped_hw_queues Christoph Hellwig
@ 2014-04-16  7:44 ` Christoph Hellwig
  2014-04-16 16:51   ` Jens Axboe
  2014-04-16  7:44 ` [PATCH 5/8] blk-mq: add blk_mq_start_hw_queues Christoph Hellwig
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16  7:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

Add a blk-mq equivalent to blk_delay_queue so that the SCSI layer can ask
to be kicked again after a delay.
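
A minimal sketch of the intended usage, assuming a hypothetical driver whose
->queue_rq handler temporarily runs out of hardware resources (the 3 ms value
is arbitrary):

static int my_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *rq)
{
        struct my_device *dev = hctx->driver_data;      /* hypothetical */

        if (!my_device_can_queue(dev)) {                /* hypothetical */
                /* stop the queue and have blk-mq kick it again in ~3 ms */
                blk_mq_stop_hw_queue(hctx);
                blk_mq_delay_queue(hctx, 3);
                return BLK_MQ_RQ_QUEUE_BUSY;
        }

        my_device_submit(dev, rq);                      /* hypothetical */
        return BLK_MQ_RQ_QUEUE_OK;
}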

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c       |    6 ++++--
 block/blk-mq.c         |   47 +++++++++++++++++++++++++++++++++++++++++------
 include/linux/blk-mq.h |    4 +++-
 3 files changed, 48 insertions(+), 9 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index ae6227f..90b6e63 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -251,8 +251,10 @@ void blk_sync_queue(struct request_queue *q)
 		struct blk_mq_hw_ctx *hctx;
 		int i;
 
-		queue_for_each_hw_ctx(q, hctx, i)
-			cancel_delayed_work_sync(&hctx->delayed_work);
+		queue_for_each_hw_ctx(q, hctx, i) {
+			cancel_delayed_work_sync(&hctx->run_work);
+			cancel_delayed_work_sync(&hctx->delay_work);
+		}
 	} else {
 		cancel_delayed_work_sync(&q->delay_work);
 	}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8a080c2..128b5e5 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -638,7 +638,7 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 	if (!async && cpumask_test_cpu(smp_processor_id(), hctx->cpumask))
 		__blk_mq_run_hw_queue(hctx);
 	else if (hctx->queue->nr_hw_queues == 1)
-		kblockd_schedule_delayed_work(&hctx->delayed_work, 0);
+		kblockd_schedule_delayed_work(&hctx->run_work, 0);
 	else {
 		unsigned int cpu;
 
@@ -649,7 +649,7 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 		 * just queue on the first CPU.
 		 */
 		cpu = cpumask_first(hctx->cpumask);
-		kblockd_schedule_delayed_work_on(cpu, &hctx->delayed_work, 0);
+		kblockd_schedule_delayed_work_on(cpu, &hctx->run_work, 0);
 	}
 }
 
@@ -673,7 +673,8 @@ EXPORT_SYMBOL(blk_mq_run_queues);
 
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx)
 {
-	cancel_delayed_work(&hctx->delayed_work);
+	cancel_delayed_work(&hctx->run_work);
+	cancel_delayed_work(&hctx->delay_work);
 	set_bit(BLK_MQ_S_STOPPED, &hctx->state);
 }
 EXPORT_SYMBOL(blk_mq_stop_hw_queue);
@@ -715,17 +716,50 @@ void blk_mq_start_stopped_hw_queues(struct request_queue *q, bool async)
 }
 EXPORT_SYMBOL(blk_mq_start_stopped_hw_queues);
 
-static void blk_mq_work_fn(struct work_struct *work)
+static void blk_mq_run_work_fn(struct work_struct *work)
 {
 	struct blk_mq_hw_ctx *hctx;
 
-	hctx = container_of(work, struct blk_mq_hw_ctx, delayed_work.work);
+	hctx = container_of(work, struct blk_mq_hw_ctx, run_work.work);
 
 	preempt_disable();
 	__blk_mq_run_hw_queue(hctx);
 	preempt_enable();
 }
 
+static void blk_mq_delay_work_fn(struct work_struct *work)
+{
+	struct blk_mq_hw_ctx *hctx;
+
+	hctx = container_of(work, struct blk_mq_hw_ctx, delay_work.work);
+
+	preempt_disable();
+	if (test_and_clear_bit(BLK_MQ_S_STOPPED, &hctx->state))
+		__blk_mq_run_hw_queue(hctx);
+	preempt_enable();
+}
+
+void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
+{
+	unsigned long tmo = msecs_to_jiffies(msecs);
+
+	if (hctx->queue->nr_hw_queues == 1) {
+		kblockd_schedule_delayed_work(&hctx->delay_work, tmo);
+	} else {
+		unsigned int cpu;
+
+		/*
+		 * It'd be great if the workqueue API had a way to pass
+		 * in a mask and had some smarts for more clever placement
+		 * than the first CPU. Or we could round-robin here. For now,
+		 * just queue on the first CPU.
+		 */
+		cpu = cpumask_first(hctx->cpumask);
+		kblockd_schedule_delayed_work_on(cpu, &hctx->delay_work, tmo);
+	}
+}
+EXPORT_SYMBOL(blk_mq_delay_queue);
+
 static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
 				    struct request *rq, bool at_head)
 {
@@ -1179,7 +1213,8 @@ static int blk_mq_init_hw_queues(struct request_queue *q,
 		if (node == NUMA_NO_NODE)
 			node = hctx->numa_node = set->numa_node;
 
-		INIT_DELAYED_WORK(&hctx->delayed_work, blk_mq_work_fn);
+		INIT_DELAYED_WORK(&hctx->run_work, blk_mq_run_work_fn);
+		INIT_DELAYED_WORK(&hctx->delay_work, blk_mq_delay_work_fn);
 		spin_lock_init(&hctx->lock);
 		INIT_LIST_HEAD(&hctx->dispatch);
 		hctx->queue = q;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 9ecfab9..ae868e7 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -18,7 +18,8 @@ struct blk_mq_hw_ctx {
 	} ____cacheline_aligned_in_smp;
 
 	unsigned long		state;		/* BLK_MQ_S_* flags */
-	struct delayed_work	delayed_work;
+	struct delayed_work	run_work;
+	struct delayed_work	delay_work;
 	cpumask_var_t		cpumask;
 
 	unsigned long		flags;		/* BLK_MQ_F_* flags */
@@ -158,6 +159,7 @@ void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx);
 void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx);
 void blk_mq_stop_hw_queues(struct request_queue *q);
 void blk_mq_start_stopped_hw_queues(struct request_queue *q, bool async);
+void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs);
 
 /*
  * Driver command data is immediately after the request. So subtract request
-- 
1.7.10.4



* [PATCH 5/8] blk-mq: add blk_mq_start_hw_queues
  2014-04-16  7:44 next round of blk-mq updates Christoph Hellwig
                   ` (3 preceding siblings ...)
  2014-04-16  7:44 ` [PATCH 4/8] blk-mq: add blk_mq_delay_queue Christoph Hellwig
@ 2014-04-16  7:44 ` Christoph Hellwig
  2014-04-16  7:44 ` [PATCH 6/8] blk-mq: add blk_mq_requeue_request Christoph Hellwig
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16  7:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

Add a helper to unconditionally kick all hardware contexts of a queue.
This will be needed by the SCSI layer to provide fair queueing between
multiple devices on a single host.
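
For illustration only (both helpers below are hypothetical), a driver could
pair this with blk_mq_stop_hw_queues around a shared host resource:

/* Sketch: throttle all hardware contexts while the host is busy and
 * kick them all, stopped or not, once it recovers. */
static void my_host_busy(struct request_queue *q)
{
        blk_mq_stop_hw_queues(q);
}

static void my_host_unbusy(struct request_queue *q)
{
        blk_mq_start_hw_queues(q);
}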

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c         |   11 +++++++++++
 include/linux/blk-mq.h |    1 +
 2 files changed, 12 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 128b5e5..7946f0b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -699,6 +699,17 @@ void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx)
 }
 EXPORT_SYMBOL(blk_mq_start_hw_queue);
 
+void blk_mq_start_hw_queues(struct request_queue *q)
+{
+	struct blk_mq_hw_ctx *hctx;
+	int i;
+
+	queue_for_each_hw_ctx(q, hctx, i)
+		blk_mq_start_hw_queue(hctx);
+}
+EXPORT_SYMBOL(blk_mq_start_hw_queues);
+
+
 void blk_mq_start_stopped_hw_queues(struct request_queue *q, bool async)
 {
 	struct blk_mq_hw_ctx *hctx;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index ae868e7..391377e 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -158,6 +158,7 @@ void blk_mq_complete_request(struct request *rq);
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx);
 void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx);
 void blk_mq_stop_hw_queues(struct request_queue *q);
+void blk_mq_start_hw_queues(struct request_queue *q);
 void blk_mq_start_stopped_hw_queues(struct request_queue *q, bool async);
 void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs);
 
-- 
1.7.10.4



* [PATCH 6/8] blk-mq: add blk_mq_requeue_request
  2014-04-16  7:44 next round of blk-mq updates Christoph Hellwig
                   ` (4 preceding siblings ...)
  2014-04-16  7:44 ` [PATCH 5/8] blk-mq: add blk_mq_start_hw_queues Christoph Hellwig
@ 2014-04-16  7:44 ` Christoph Hellwig
  2014-04-16  7:44 ` [PATCH 7/8] blk-mq: rename mq_flush_work struct request member Christoph Hellwig
  2014-04-16  7:44 ` [PATCH 8/8] block: export blk_finish_request Christoph Hellwig
  7 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16  7:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

This allows requeueing a request that has already been accepted by
->queue_rq.  This is needed by the SCSI layer in various error conditions.

The existing internal blk_mq_requeue_request is renamed to
__blk_mq_requeue_request, as it is a lower-level building block for this
functionality.
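
A sketch of how a driver might use it, e.g. on a partial completion;
my_complete_partial is a hypothetical name and not part of this series:

/* Sketch: requeue a started request when only part of it completed. */
static void my_complete_partial(struct request *rq, int error,
                                unsigned int bytes_done)
{
        if (blk_update_request(rq, error, bytes_done)) {
                /* bytes still outstanding: reinsert and reissue later */
                blk_mq_requeue_request(rq);
                return;
        }

        __blk_mq_end_io(rq, error);
}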

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c         |   18 ++++++++++++++++--
 include/linux/blk-mq.h |    2 ++
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7946f0b..99d3795 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -400,7 +400,7 @@ static void blk_mq_start_request(struct request *rq, bool last)
 		rq->cmd_flags |= REQ_END;
 }
 
-static void blk_mq_requeue_request(struct request *rq)
+static void __blk_mq_requeue_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
 
@@ -413,6 +413,20 @@ static void blk_mq_requeue_request(struct request *rq)
 		rq->nr_phys_segments--;
 }
 
+void blk_mq_requeue_request(struct request *rq)
+{
+	struct request_queue *q = rq->q;
+
+	__blk_mq_requeue_request(rq);
+	blk_clear_rq_complete(rq);
+
+	trace_block_rq_requeue(q, rq);
+
+	BUG_ON(blk_queued_rq(rq));
+	blk_mq_insert_request(rq, true, true, false);
+}
+EXPORT_SYMBOL(blk_mq_requeue_request);
+
 struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags, unsigned int tag)
 {
 	return tags->rqs[tag];
@@ -600,7 +614,7 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
 			 * time
 			 */
 			list_add(&rq->queuelist, &rq_list);
-			blk_mq_requeue_request(rq);
+			__blk_mq_requeue_request(rq);
 			break;
 		default:
 			pr_err("blk-mq: bad return on queue: %d\n", ret);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 391377e..ab469d5 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -153,6 +153,8 @@ void blk_mq_free_single_hw_queue(struct blk_mq_hw_ctx *, unsigned int);
 void blk_mq_end_io(struct request *rq, int error);
 void __blk_mq_end_io(struct request *rq, int error);
 
+void blk_mq_requeue_request(struct request *rq);
+
 void blk_mq_complete_request(struct request *rq);
 
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx);
-- 
1.7.10.4



* [PATCH 7/8] blk-mq: rename mq_flush_work struct request member
  2014-04-16  7:44 next round of blk-mq updates Christoph Hellwig
                   ` (5 preceding siblings ...)
  2014-04-16  7:44 ` [PATCH 6/8] blk-mq: add blk_mq_requeue_request Christoph Hellwig
@ 2014-04-16  7:44 ` Christoph Hellwig
  2014-04-16  7:44 ` [PATCH 8/8] block: export blk_finish_request Christoph Hellwig
  7 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16  7:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

We will use this work_struct to requeue SCSI commands from the completion
handler as well, so give it a more generic name.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-flush.c      |    6 +++---
 include/linux/blkdev.h |    2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index c41fc19..ec7a224 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -134,7 +134,7 @@ static void mq_flush_run(struct work_struct *work)
 {
 	struct request *rq;
 
-	rq = container_of(work, struct request, mq_flush_work);
+	rq = container_of(work, struct request, requeue_work);
 
 	memset(&rq->csd, 0, sizeof(rq->csd));
 	blk_mq_insert_request(rq, false, true, false);
@@ -143,8 +143,8 @@ static void mq_flush_run(struct work_struct *work)
 static bool blk_flush_queue_rq(struct request *rq, bool add_front)
 {
 	if (rq->q->mq_ops) {
-		INIT_WORK(&rq->mq_flush_work, mq_flush_run);
-		kblockd_schedule_work(&rq->mq_flush_work);
+		INIT_WORK(&rq->requeue_work, mq_flush_run);
+		kblockd_schedule_work(&rq->requeue_work);
 		return false;
 	} else {
 		if (add_front)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 95bb551..7128808 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -98,7 +98,7 @@ struct request {
 	struct list_head queuelist;
 	union {
 		struct call_single_data csd;
-		struct work_struct mq_flush_work;
+		struct work_struct requeue_work;
 		unsigned long fifo_time;
 	};
 
-- 
1.7.10.4



* [PATCH 8/8] block: export blk_finish_request
  2014-04-16  7:44 next round of blk-mq updates Christoph Hellwig
                   ` (6 preceding siblings ...)
  2014-04-16  7:44 ` [PATCH 7/8] blk-mq: rename mq_flush_work struct request member Christoph Hellwig
@ 2014-04-16  7:44 ` Christoph Hellwig
  7 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16  7:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

This allows mirroring the blk-mq code flow for a more readable I/O
completion handler in SCSI.
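
A hedged sketch of the intended call pattern (my_end_request is a
hypothetical name): update the request first, then finish it with the queue
lock held, mirroring blk_end_bidi_request:

static void my_end_request(struct request *req, int error,
                           unsigned int bytes_done)
{
        unsigned long flags;

        if (blk_update_request(req, error, bytes_done))
                return;         /* bytes still outstanding */

        /* blk_finish_request expects the queue lock to be held */
        spin_lock_irqsave(req->q->queue_lock, flags);
        blk_finish_request(req, error);
        spin_unlock_irqrestore(req->q->queue_lock, flags);
}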

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c       |    3 ++-
 include/linux/blkdev.h |    1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 90b6e63..c426970 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2497,7 +2497,7 @@ EXPORT_SYMBOL_GPL(blk_unprep_request);
 /*
  * queue lock must be held
  */
-static void blk_finish_request(struct request *req, int error)
+void blk_finish_request(struct request *req, int error)
 {
 	if (blk_rq_tagged(req))
 		blk_queue_end_tag(req->q, req);
@@ -2523,6 +2523,7 @@ static void blk_finish_request(struct request *req, int error)
 		__blk_put_request(req->q, req);
 	}
 }
+EXPORT_SYMBOL(blk_finish_request);
 
 /**
  * blk_end_bidi_request - Complete a bidi request
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 7128808..20b26d4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -936,6 +936,7 @@ extern struct request *blk_fetch_request(struct request_queue *q);
  */
 extern bool blk_update_request(struct request *rq, int error,
 			       unsigned int nr_bytes);
+extern void blk_finish_request(struct request *rq, int error);
 extern bool blk_end_request(struct request *rq, int error,
 			    unsigned int nr_bytes);
 extern void blk_end_request_all(struct request *rq, int error);
-- 
1.7.10.4



* Re: [PATCH 2/8] blk-mq: bidi support
  2014-04-16  7:44 ` [PATCH 2/8] blk-mq: bidi support Christoph Hellwig
@ 2014-04-16 16:41   ` Jens Axboe
  2014-04-16 16:44     ` Christoph Hellwig
  0 siblings, 1 reply; 15+ messages in thread
From: Jens Axboe @ 2014-04-16 16:41 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-kernel, linux-scsi

On 04/16/2014 01:44 AM, Christoph Hellwig wrote:
> diff --git a/block/bsg.c b/block/bsg.c
> index 420a5a9..2956086 100644
> --- a/block/bsg.c
> +++ b/block/bsg.c
> @@ -1008,7 +1008,7 @@ int bsg_register_queue(struct request_queue *q, struct device *parent,
>  	/*
>  	 * we need a proper transport to send commands, not a stacked device
>  	 */
> -	if (!q->request_fn)
> +	if (!q->request_fn && !q->mq_ops)
>  		return 0;
>  
>  	bcd = &q->bsg_dev;

This looks misplaced. But I dropped the one I generated last week, I'll
queue it up separately in the drivers branch.

-- 
Jens Axboe



* Re: [PATCH 2/8] blk-mq: bidi support
  2014-04-16 16:41   ` Jens Axboe
@ 2014-04-16 16:44     ` Christoph Hellwig
  2014-04-16 16:45       ` Jens Axboe
  0 siblings, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16 16:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel, linux-scsi

On Wed, Apr 16, 2014 at 10:41:30AM -0600, Jens Axboe wrote:
> > -	if (!q->request_fn)
> > +	if (!q->request_fn && !q->mq_ops)
> >  		return 0;
> >  
> >  	bcd = &q->bsg_dev;
> 
> This looks misplaced. But I dropped the one I generated last week, I'll
> queue it up separately in the drivers branch.

Could be argued that it should be a separate patch, but the check should
work fine.


* Re: [PATCH 2/8] blk-mq: bidi support
  2014-04-16 16:44     ` Christoph Hellwig
@ 2014-04-16 16:45       ` Jens Axboe
  2014-04-16 16:47         ` Christoph Hellwig
  0 siblings, 1 reply; 15+ messages in thread
From: Jens Axboe @ 2014-04-16 16:45 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-kernel, linux-scsi

On 04/16/2014 10:44 AM, Christoph Hellwig wrote:
> On Wed, Apr 16, 2014 at 10:41:30AM -0600, Jens Axboe wrote:
>>> -	if (!q->request_fn)
>>> +	if (!q->request_fn && !q->mq_ops)
>>>  		return 0;
>>>  
>>>  	bcd = &q->bsg_dev;
>>
>> This looks misplaced. But I dropped the one I generated last week, I'll
>> queue it up separately in the drivers branch.
> 
> Could be argued that it should be a separate patch, but the check should
> work fine.

There's nothing wrong with the check, it's identical to one I sent out
last week. But it's not part of the bidi enable patch, it's a separate
bug fix for bsg.

-- 
Jens Axboe



* Re: [PATCH 2/8] blk-mq: bidi support
  2014-04-16 16:45       ` Jens Axboe
@ 2014-04-16 16:47         ` Christoph Hellwig
  2014-04-16 16:52           ` Jens Axboe
  0 siblings, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2014-04-16 16:47 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, linux-kernel, linux-scsi

On Wed, Apr 16, 2014 at 10:45:03AM -0600, Jens Axboe wrote:
> There's nothing wrong with the check, it's identical to one I sent out
> last week. But it's not part of the bidi enable patch, it's a separate
> bug fix for bsg.

Do you want to split it up while applying or should I resend two
separate patches?


* Re: [PATCH 4/8] blk-mq: add blk_mq_delay_queue
  2014-04-16  7:44 ` [PATCH 4/8] blk-mq: add blk_mq_delay_queue Christoph Hellwig
@ 2014-04-16 16:51   ` Jens Axboe
  0 siblings, 0 replies; 15+ messages in thread
From: Jens Axboe @ 2014-04-16 16:51 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-kernel, linux-scsi

On 04/16/2014 01:44 AM, Christoph Hellwig wrote:
> Add a blk-mq equivalent to blk_delay_queue so that the scsi layer can ask
> to be kicked again after a delay.

Applied with the preempt bits from the delayed work handler removed.

-- 
Jens Axboe



* Re: [PATCH 2/8] blk-mq: bidi support
  2014-04-16 16:47         ` Christoph Hellwig
@ 2014-04-16 16:52           ` Jens Axboe
  0 siblings, 0 replies; 15+ messages in thread
From: Jens Axboe @ 2014-04-16 16:52 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-kernel, linux-scsi

On 04/16/2014 10:47 AM, Christoph Hellwig wrote:
> On Wed, Apr 16, 2014 at 10:45:03AM -0600, Jens Axboe wrote:
>> There's nothing wrong with the check, it's identical to one I sent out
>> last week. But it's not part of the bidi enable patch, it's a separate
>> bug fix for bsg.
> 
> Do you want to split it up while applying or should I resend two
> separate patches?

I cut that part and applied, whole series is applied now.

-- 
Jens Axboe



end of thread (newest message: 2014-04-16 16:52 UTC)

Thread overview: 15+ messages
2014-04-16  7:44 next round of blk-mq updates Christoph Hellwig
2014-04-16  7:44 ` [PATCH 1/8] blk-mq: allow drivers to hook into I/O completion Christoph Hellwig
2014-04-16  7:44 ` [PATCH 2/8] blk-mq: bidi support Christoph Hellwig
2014-04-16 16:41   ` Jens Axboe
2014-04-16 16:44     ` Christoph Hellwig
2014-04-16 16:45       ` Jens Axboe
2014-04-16 16:47         ` Christoph Hellwig
2014-04-16 16:52           ` Jens Axboe
2014-04-16  7:44 ` [PATCH 3/8] blk-mq: add async parameter to blk_mq_start_stopped_hw_queues Christoph Hellwig
2014-04-16  7:44 ` [PATCH 4/8] blk-mq: add blk_mq_delay_queue Christoph Hellwig
2014-04-16 16:51   ` Jens Axboe
2014-04-16  7:44 ` [PATCH 5/8] blk-mq: add blk_mq_start_hw_queues Christoph Hellwig
2014-04-16  7:44 ` [PATCH 6/8] blk-mq: add blk_mq_requeue_request Christoph Hellwig
2014-04-16  7:44 ` [PATCH 7/8] blk-mq: rename mq_flush_work struct request member Christoph Hellwig
2014-04-16  7:44 ` [PATCH 8/8] block: export blk_finish_request Christoph Hellwig
