* [PATCHSET block:for-3.2/core] further updates to blk_cleanup_queue()
@ 2011-10-21  3:56 Tejun Heo
  2011-10-21  3:56 ` [PATCH 1/6] block, sx8: kill blk_insert_request() Tejun Heo
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Tejun Heo @ 2011-10-21  3:56 UTC (permalink / raw)
  To: axboe, vgoyal, jgarzik, davem; +Cc: linux-kernel, ctalbott, rni

Hello,

Patchset "fix request_queue life-cycle management"[1] tried to fix
lifecycle management by making blk_cleanup_queue() drain and shut down
the queue; however, there still are some holes.  This patchset
tightens externally visible API a bit and plugs those holes.

 0001-block-sx8-kill-blk_insert_request.patch
 0002-block-allow-blk_execute_rq_nowait-to-be-called-from-.patch
 0003-block-ide-unexport-elv_add_request.patch
 0004-block-add-blk_queue_dead.patch
 0005-block-fix-drain_all-condition-in-blk_drain_queue.patch
 0006-block-add-missing-blk_queue_dead-checks.patch

0001-0003 remove/unexport two request insertion functions which don't
have a proper DEAD check.  Users are switched to
blk_execute_rq_nowait().
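
As a rough illustration of the conversion 0001-0003 apply (a minimal
sketch with placeholder helper names and variables, not code lifted
verbatim from the patches):

	/* before: blk_insert_request() marked and queued the request */
	static void submit_special_old(struct request_queue *q,
				       struct request *rq, void *data)
	{
		blk_insert_request(q, rq, 1 /* at_head */, data);
	}

	/* after: the caller marks the request and uses the DEAD-checking path */
	static void submit_special_new(struct request_queue *q,
				       struct request *rq, void *data)
	{
		rq->cmd_type = REQ_TYPE_SPECIAL;
		rq->special = data;
		blk_execute_rq_nowait(q, NULL, rq, true /* at_head */,
				      NULL /* done: no completion callback */);
	}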

0004 adds a blk_queue_dead() macro for convenience.

0005 updates blk_drain_queue() such that it also waits for requests
which weren't allocated from the block layer.

0006 adds missing DEAD checks.

This patchset is on top of block:for-3.2/core commit 3bcfeaf93f4
("block: initialize the bounce pool if high memory may be added later")
and is available in the following git branch.

 git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git block-ref

diffstat follows.

 block/blk-core.c         |  105 ++++++++++++++++++++---------------------------
 block/blk-exec.c         |   15 ++++--
 block/blk-flush.c        |    2 
 block/blk-sysfs.c        |    4 -
 block/blk-throttle.c     |    4 -
 block/blk.h              |    3 -
 block/elevator.c         |   16 +------
 drivers/block/sx8.c      |   12 +++--
 drivers/ide/ide-atapi.c  |    7 +--
 drivers/ide/ide-park.c   |    2 
 include/linux/blkdev.h   |    2 
 include/linux/elevator.h |    2 
 12 files changed, 78 insertions(+), 96 deletions(-)

Thanks.

--
tejun

[1] http://thread.gmane.org/gmane.linux.kernel/1205150


* [PATCH 1/6] block, sx8: kill blk_insert_request()
  2011-10-21  3:56 [PATCHSET block:for-3.2/core] further updates to blk_cleanup_queue() Tejun Heo
@ 2011-10-21  3:56 ` Tejun Heo
  2011-10-21  4:03   ` Jeff Garzik
  2011-10-21  3:56 ` [PATCH 2/6] block: allow blk_execute_rq_nowait() to be called from IRQ context Tejun Heo
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Tejun Heo @ 2011-10-21  3:56 UTC (permalink / raw)
  To: axboe, vgoyal, jgarzik, davem; +Cc: linux-kernel, ctalbott, rni, Tejun Heo

The only user left for blk_insert_request() is sx8 and it can be
trivially switched to use blk_execute_rq_nowait() - special requests
aren't included in io stat and sx8 doesn't use block layer tagging.
Switch sx8 and kill blk_insert_request().

This patch doesn't introduce any functional difference.

Only compile tested.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c       |   48 ------------------------------------------------
 drivers/block/sx8.c    |   12 ++++++++----
 include/linux/blkdev.h |    1 -
 3 files changed, 8 insertions(+), 53 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 7e15235..0f3baf4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1005,54 +1005,6 @@ static void add_acct_request(struct request_queue *q, struct request *rq,
 	__elv_add_request(q, rq, where);
 }
 
-/**
- * blk_insert_request - insert a special request into a request queue
- * @q:		request queue where request should be inserted
- * @rq:		request to be inserted
- * @at_head:	insert request at head or tail of queue
- * @data:	private data
- *
- * Description:
- *    Many block devices need to execute commands asynchronously, so they don't
- *    block the whole kernel from preemption during request execution.  This is
- *    accomplished normally by inserting aritficial requests tagged as
- *    REQ_TYPE_SPECIAL in to the corresponding request queue, and letting them
- *    be scheduled for actual execution by the request queue.
- *
- *    We have the option of inserting the head or the tail of the queue.
- *    Typically we use the tail for new ioctls and so forth.  We use the head
- *    of the queue for things like a QUEUE_FULL message from a device, or a
- *    host that is unable to accept a particular command.
- */
-void blk_insert_request(struct request_queue *q, struct request *rq,
-			int at_head, void *data)
-{
-	int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
-	unsigned long flags;
-
-	/*
-	 * tell I/O scheduler that this isn't a regular read/write (ie it
-	 * must not attempt merges on this) and that it acts as a soft
-	 * barrier
-	 */
-	rq->cmd_type = REQ_TYPE_SPECIAL;
-
-	rq->special = data;
-
-	spin_lock_irqsave(q->queue_lock, flags);
-
-	/*
-	 * If command is tagged, release the tag
-	 */
-	if (blk_rq_tagged(rq))
-		blk_queue_end_tag(q, rq);
-
-	add_acct_request(q, rq, where);
-	__blk_run_queue(q);
-	spin_unlock_irqrestore(q->queue_lock, flags);
-}
-EXPORT_SYMBOL(blk_insert_request);
-
 static void part_round_stats_single(int cpu, struct hd_struct *part,
 				    unsigned long now)
 {
diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c
index b70f0fc..e7472f5 100644
--- a/drivers/block/sx8.c
+++ b/drivers/block/sx8.c
@@ -619,8 +619,10 @@ static int carm_array_info (struct carm_host *host, unsigned int array_idx)
 	       host->state == HST_DEV_SCAN);
 	spin_unlock_irq(&host->lock);
 
-	DPRINTK("blk_insert_request, tag == %u\n", idx);
-	blk_insert_request(host->oob_q, crq->rq, 1, crq);
+	DPRINTK("blk_execute_rq_nowait, tag == %u\n", idx);
+	crq->rq->cmd_type = REQ_TYPE_SPECIAL;
+	crq->rq->special = crq;
+	blk_execute_rq_nowait(host->oob_q, NULL, crq->rq, true, NULL);
 
 	return 0;
 
@@ -658,8 +660,10 @@ static int carm_send_special (struct carm_host *host, carm_sspc_t func)
 	BUG_ON(rc < 0);
 	crq->msg_bucket = (u32) rc;
 
-	DPRINTK("blk_insert_request, tag == %u\n", idx);
-	blk_insert_request(host->oob_q, crq->rq, 1, crq);
+	DPRINTK("blk_execute_rq_nowait, tag == %u\n", idx);
+	crq->rq->cmd_type = REQ_TYPE_SPECIAL;
+	crq->rq->special = crq;
+	blk_execute_rq_nowait(host->oob_q, NULL, crq->rq, true, NULL);
 
 	return 0;
 }
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5267cd2..45563b0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -660,7 +660,6 @@ extern void __blk_put_request(struct request_queue *, struct request *);
 extern struct request *blk_get_request(struct request_queue *, int, gfp_t);
 extern struct request *blk_make_request(struct request_queue *, struct bio *,
 					gfp_t);
-extern void blk_insert_request(struct request_queue *, struct request *, int, void *);
 extern void blk_requeue_request(struct request_queue *, struct request *);
 extern void blk_add_request_payload(struct request *rq, struct page *page,
 		unsigned int len);
-- 
1.7.3.1



* [PATCH 2/6] block: allow blk_execute_rq_nowait() to be called from IRQ context
  2011-10-21  3:56 [PATCHSET block:for-3.2/core] further updates to blk_cleanup_queue() Tejun Heo
  2011-10-21  3:56 ` [PATCH 1/6] block, sx8: kill blk_insert_request() Tejun Heo
@ 2011-10-21  3:56 ` Tejun Heo
  2011-10-21  9:20   ` Christoph Hellwig
  2011-10-21  3:56 ` [PATCH 3/6] block, ide: unexport elv_add_request() Tejun Heo
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Tejun Heo @ 2011-10-21  3:56 UTC (permalink / raw)
  To: axboe, vgoyal, jgarzik, davem; +Cc: linux-kernel, ctalbott, rni, Tejun Heo

Currently blk_execute_rq_nowait() directly calls __blk_run_queue() and
thus can't be called from IRQ context.  This patch updates it to use
blk_run_queue_async() instead.  This will be used to unexport
elv_add_request().

This changes how the queue is kicked after blk_execute_rq_nowait(), but
it's hardly a hot path and the effect shouldn't be noticeable.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
---
 block/blk-exec.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/block/blk-exec.c b/block/blk-exec.c
index a1ebceb..056f097 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -49,6 +49,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 			   rq_end_io_fn *done)
 {
 	int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
+	unsigned long flags;
 
 	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
 		rq->errors = -ENXIO;
@@ -59,14 +60,13 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 
 	rq->rq_disk = bd_disk;
 	rq->end_io = done;
-	WARN_ON(irqs_disabled());
-	spin_lock_irq(q->queue_lock);
+	spin_lock_irqsave(q->queue_lock, flags);
 	__elv_add_request(q, rq, where);
-	__blk_run_queue(q);
+	blk_run_queue_async(q);
 	/* the queue is stopped so it won't be run */
 	if (rq->cmd_type == REQ_TYPE_PM_RESUME)
 		q->request_fn(q);
-	spin_unlock_irq(q->queue_lock);
+	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
 
-- 
1.7.3.1



* [PATCH 3/6] block, ide: unexport elv_add_request()
  2011-10-21  3:56 [PATCHSET block:for-3.2/core] further updates to blk_cleanup_queue() Tejun Heo
  2011-10-21  3:56 ` [PATCH 1/6] block, sx8: kill blk_insert_request() Tejun Heo
  2011-10-21  3:56 ` [PATCH 2/6] block: allow blk_execute_rq_nowait() to be called from IRQ context Tejun Heo
@ 2011-10-21  3:56 ` Tejun Heo
  2011-10-21  3:56 ` [PATCH 4/6] block: add blk_queue_dead() Tejun Heo
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2011-10-21  3:56 UTC (permalink / raw)
  To: axboe, vgoyal, jgarzik, davem; +Cc: linux-kernel, ctalbott, rni, Tejun Heo

The only elv_add_request() users outside of the block layer proper are
ide-atapi and ide-park.  Now that blk_execute_rq_nowait() can be called
from IRQ context, both can be switched to it.  Switch the IDE users,
make [__]elv_add_request() internal to the block layer, and drop the
queue_lock-grabbing variant.

The IDE changes are lightly tested.  The block layer changes don't
introduce any behavior change.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c         |    6 +++---
 block/blk-exec.c         |    2 +-
 block/blk-flush.c        |    2 +-
 block/blk.h              |    1 +
 block/elevator.c         |   16 +++-------------
 drivers/ide/ide-atapi.c  |    7 ++++---
 drivers/ide/ide-park.c   |    2 +-
 include/linux/elevator.h |    2 --
 8 files changed, 14 insertions(+), 24 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 0f3baf4..024cf4a 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1002,7 +1002,7 @@ static void add_acct_request(struct request_queue *q, struct request *rq,
 			     int where)
 {
 	drive_stat_acct(rq, 1);
-	__elv_add_request(q, rq, where);
+	elv_add_request(q, rq, where);
 }
 
 static void part_round_stats_single(int cpu, struct hd_struct *part,
@@ -2763,9 +2763,9 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 		 * rq is already accounted, so use raw insert
 		 */
 		if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA))
-			__elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH);
+			elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH);
 		else
-			__elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE);
+			elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE);
 
 		depth++;
 	}
diff --git a/block/blk-exec.c b/block/blk-exec.c
index 056f097..5324501 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -61,7 +61,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 	rq->rq_disk = bd_disk;
 	rq->end_io = done;
 	spin_lock_irqsave(q->queue_lock, flags);
-	__elv_add_request(q, rq, where);
+	elv_add_request(q, rq, where);
 	blk_run_queue_async(q);
 	/* the queue is stopped so it won't be run */
 	if (rq->cmd_type == REQ_TYPE_PM_RESUME)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 491eb30..ab01265 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -288,7 +288,7 @@ static void flush_data_end_io(struct request *rq, int error)
  * blk_insert_flush - insert a new FLUSH/FUA request
  * @rq: request to insert
  *
- * To be called from __elv_add_request() for %ELEVATOR_INSERT_FLUSH insertions.
+ * To be called from elv_add_request() for %ELEVATOR_INSERT_FLUSH insertions.
  * @rq is being submitted.  Analyze what needs to be done and put it on the
  * right queue.
  *
diff --git a/block/blk.h b/block/blk.h
index 3f6551b..77ad885 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -52,6 +52,7 @@ static inline void blk_clear_rq_complete(struct request *rq)
  */
 #define ELV_ON_HASH(rq)		(!hlist_unhashed(&(rq)->hash))
 
+void elv_add_request(struct request_queue *q, struct request *rq, int where);
 void blk_insert_flush(struct request *rq);
 void blk_abort_flushes(struct request_queue *q);
 
diff --git a/block/elevator.c b/block/elevator.c
index 66343d6..51447dd 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -599,7 +599,7 @@ void elv_requeue_request(struct request_queue *q, struct request *rq)
 
 	rq->cmd_flags &= ~REQ_STARTED;
 
-	__elv_add_request(q, rq, ELEVATOR_INSERT_REQUEUE);
+	elv_add_request(q, rq, ELEVATOR_INSERT_REQUEUE);
 }
 
 void elv_drain_elevator(struct request_queue *q)
@@ -636,8 +636,9 @@ void elv_quiesce_end(struct request_queue *q)
 	spin_unlock_irq(q->queue_lock);
 }
 
-void __elv_add_request(struct request_queue *q, struct request *rq, int where)
+void elv_add_request(struct request_queue *q, struct request *rq, int where)
 {
+	lockdep_is_held(q->queue_lock);
 	trace_block_rq_insert(q, rq);
 
 	rq->q = q;
@@ -715,17 +716,6 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 		BUG();
 	}
 }
-EXPORT_SYMBOL(__elv_add_request);
-
-void elv_add_request(struct request_queue *q, struct request *rq, int where)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(q->queue_lock, flags);
-	__elv_add_request(q, rq, where);
-	spin_unlock_irqrestore(q->queue_lock, flags);
-}
-EXPORT_SYMBOL(elv_add_request);
 
 struct request *elv_latter_request(struct request_queue *q, struct request *rq)
 {
diff --git a/drivers/ide/ide-atapi.c b/drivers/ide/ide-atapi.c
index 6f218e01..2f1b474 100644
--- a/drivers/ide/ide-atapi.c
+++ b/drivers/ide/ide-atapi.c
@@ -221,6 +221,8 @@ EXPORT_SYMBOL_GPL(ide_prep_sense);
 
 int ide_queue_sense_rq(ide_drive_t *drive, void *special)
 {
+	struct request *rq = &drive->sense_rq;
+
 	/* deferred failure from ide_prep_sense() */
 	if (!drive->sense_rq_armed) {
 		printk(KERN_WARNING PFX "%s: error queuing a sense request\n",
@@ -228,12 +230,11 @@ int ide_queue_sense_rq(ide_drive_t *drive, void *special)
 		return -ENOMEM;
 	}
 
-	drive->sense_rq.special = special;
 	drive->sense_rq_armed = false;
-
 	drive->hwif->rq = NULL;
 
-	elv_add_request(drive->queue, &drive->sense_rq, ELEVATOR_INSERT_FRONT);
+	rq->special = special;
+	blk_execute_rq_nowait(drive->queue, rq->rq_disk, rq, true, NULL);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(ide_queue_sense_rq);
diff --git a/drivers/ide/ide-park.c b/drivers/ide/ide-park.c
index 6ab9ab2..0c64957 100644
--- a/drivers/ide/ide-park.c
+++ b/drivers/ide/ide-park.c
@@ -52,7 +52,7 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout)
 	rq->cmd[0] = REQ_UNPARK_HEADS;
 	rq->cmd_len = 1;
 	rq->cmd_type = REQ_TYPE_SPECIAL;
-	elv_add_request(q, rq, ELEVATOR_INSERT_FRONT);
+	blk_execute_rq_nowait(q, NULL, rq, true, NULL);
 
 out:
 	return;
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 1d0f7a2..34d71f5 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -105,8 +105,6 @@ struct elevator_queue
  */
 extern void elv_dispatch_sort(struct request_queue *, struct request *);
 extern void elv_dispatch_add_tail(struct request_queue *, struct request *);
-extern void elv_add_request(struct request_queue *, struct request *, int);
-extern void __elv_add_request(struct request_queue *, struct request *, int);
 extern int elv_merge(struct request_queue *, struct request **, struct bio *);
 extern int elv_try_merge(struct request *, struct bio *);
 extern void elv_merge_requests(struct request_queue *, struct request *,
-- 
1.7.3.1



* [PATCH 4/6] block: add blk_queue_dead()
  2011-10-21  3:56 [PATCHSET block:for-3.2/core] further updates to blk_cleanup_queue() Tejun Heo
                   ` (2 preceding siblings ...)
  2011-10-21  3:56 ` [PATCH 3/6] block, ide: unexport elv_add_request() Tejun Heo
@ 2011-10-21  3:56 ` Tejun Heo
  2011-10-21  3:56 ` [PATCH 5/6] block: fix drain_all condition in blk_drain_queue() Tejun Heo
  2011-10-21  3:56 ` [PATCH 6/6] block: add missing blk_queue_dead() checks Tejun Heo
  5 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2011-10-21  3:56 UTC (permalink / raw)
  To: axboe, vgoyal, jgarzik, davem; +Cc: linux-kernel, ctalbott, rni, Tejun Heo

There are a number of QUEUE_FLAG_DEAD tests.  Add blk_queue_dead()
macro and use it.

This patch doesn't introduce any functional difference.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c       |    6 +++---
 block/blk-exec.c       |    2 +-
 block/blk-sysfs.c      |    4 ++--
 block/blk-throttle.c   |    4 ++--
 block/blk.h            |    2 +-
 include/linux/blkdev.h |    1 +
 6 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 024cf4a..abf751c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -603,7 +603,7 @@ EXPORT_SYMBOL(blk_init_allocated_queue_node);
 
 int blk_get_queue(struct request_queue *q)
 {
-	if (likely(!test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
+	if (likely(!blk_queue_dead(q))) {
 		kobject_get(&q->kobj);
 		return 0;
 	}
@@ -750,7 +750,7 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 	const bool is_sync = rw_is_sync(rw_flags) != 0;
 	int may_queue;
 
-	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
+	if (unlikely(blk_queue_dead(q)))
 		return NULL;
 
 	may_queue = elv_may_queue(q, rw_flags);
@@ -870,7 +870,7 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 		struct io_context *ioc;
 		struct request_list *rl = &q->rq;
 
-		if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
+		if (unlikely(blk_queue_dead(q)))
 			return NULL;
 
 		prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
diff --git a/block/blk-exec.c b/block/blk-exec.c
index 5324501..d11c0b8 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -51,7 +51,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 	int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
 	unsigned long flags;
 
-	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
+	if (unlikely(blk_queue_dead(q))) {
 		rq->errors = -ENXIO;
 		if (rq->end_io)
 			rq->end_io(rq, rq->errors);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index e7f9f65..f0b2ca8 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -425,7 +425,7 @@ queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
 	if (!entry->show)
 		return -EIO;
 	mutex_lock(&q->sysfs_lock);
-	if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
+	if (blk_queue_dead(q)) {
 		mutex_unlock(&q->sysfs_lock);
 		return -ENOENT;
 	}
@@ -447,7 +447,7 @@ queue_attr_store(struct kobject *kobj, struct attribute *attr,
 
 	q = container_of(kobj, struct request_queue, kobj);
 	mutex_lock(&q->sysfs_lock);
-	if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
+	if (blk_queue_dead(q)) {
 		mutex_unlock(&q->sysfs_lock);
 		return -ENOENT;
 	}
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 8edb949..f799fa2 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -310,7 +310,7 @@ static struct throtl_grp * throtl_get_tg(struct throtl_data *td)
 	struct request_queue *q = td->queue;
 
 	/* no throttling for dead queue */
-	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
+	if (unlikely(blk_queue_dead(q)))
 		return NULL;
 
 	rcu_read_lock();
@@ -335,7 +335,7 @@ static struct throtl_grp * throtl_get_tg(struct throtl_data *td)
 	spin_lock_irq(q->queue_lock);
 
 	/* Make sure @q is still alive */
-	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
+	if (unlikely(blk_queue_dead(q))) {
 		kfree(tg);
 		return NULL;
 	}
diff --git a/block/blk.h b/block/blk.h
index 77ad885..f8c797c 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -86,7 +86,7 @@ static inline struct request *__elv_next_request(struct request_queue *q)
 			q->flush_queue_delayed = 1;
 			return NULL;
 		}
-		if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags) ||
+		if (unlikely(blk_queue_dead(q)) ||
 		    !q->elevator->ops->elevator_dispatch_fn(q, 0))
 			return NULL;
 	}
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 45563b0..6a4ab84 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -481,6 +481,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
 
 #define blk_queue_tagged(q)	test_bit(QUEUE_FLAG_QUEUED, &(q)->queue_flags)
 #define blk_queue_stopped(q)	test_bit(QUEUE_FLAG_STOPPED, &(q)->queue_flags)
+#define blk_queue_dead(q)	test_bit(QUEUE_FLAG_DEAD, &(q)->queue_flags)
 #define blk_queue_nomerges(q)	test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)
 #define blk_queue_noxmerges(q)	\
 	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
-- 
1.7.3.1



* [PATCH 5/6] block: fix drain_all condition in blk_drain_queue()
  2011-10-21  3:56 [PATCHSET block:for-3.2/core] further updates to blk_cleanup_queue() Tejun Heo
                   ` (3 preceding siblings ...)
  2011-10-21  3:56 ` [PATCH 4/6] block: add blk_queue_dead() Tejun Heo
@ 2011-10-21  3:56 ` Tejun Heo
  2011-10-21  3:56 ` [PATCH 6/6] block: add missing blk_queue_dead() checks Tejun Heo
  5 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2011-10-21  3:56 UTC (permalink / raw)
  To: axboe, vgoyal, jgarzik, davem; +Cc: linux-kernel, ctalbott, rni, Tejun Heo

When trying to drain all requests, blk_drain_queue() checked only
q->rq.count[]; however, this only tracks REQ_ALLOCED requests.  This
patch updates blk_drain_queue() such that it looks at all the counters
and queues so that request_queue is actually empty on completion.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c |   24 ++++++++++++++++++------
 1 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index abf751c..53e36c6 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -358,7 +358,8 @@ EXPORT_SYMBOL(blk_put_queue);
 void blk_drain_queue(struct request_queue *q, bool drain_all)
 {
 	while (true) {
-		int nr_rqs;
+		bool drain = false;
+		int i;
 
 		spin_lock_irq(q->queue_lock);
 
@@ -368,14 +369,25 @@ void blk_drain_queue(struct request_queue *q, bool drain_all)
 
 		__blk_run_queue(q);
 
-		if (drain_all)
-			nr_rqs = q->rq.count[0] + q->rq.count[1];
-		else
-			nr_rqs = q->rq.elvpriv;
+		drain |= q->rq.elvpriv;
+
+		/*
+		 * Unfortunately, requests are queued at and tracked from
+		 * multiple places and there's no single counter which can
+		 * be drained.  Check all the queues and counters.
+		 */
+		if (drain_all) {
+			drain |= !list_empty(&q->queue_head);
+			for (i = 0; i < 2; i++) {
+				drain |= q->rq.count[i];
+				drain |= q->in_flight[i];
+				drain |= !list_empty(&q->flush_queue[i]);
+			}
+		}
 
 		spin_unlock_irq(q->queue_lock);
 
-		if (!nr_rqs)
+		if (!drain)
 			break;
 		msleep(10);
 	}
-- 
1.7.3.1



* [PATCH 6/6] block: add missing blk_queue_dead() checks
  2011-10-21  3:56 [PATCHSET block:for-3.2/core] further updates to blk_cleanup_queue() Tejun Heo
                   ` (4 preceding siblings ...)
  2011-10-21  3:56 ` [PATCH 5/6] block: fix drain_all condition in blk_drain_queue() Tejun Heo
@ 2011-10-21  3:56 ` Tejun Heo
  5 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2011-10-21  3:56 UTC (permalink / raw)
  To: axboe, vgoyal, jgarzik, davem; +Cc: linux-kernel, ctalbott, rni, Tejun Heo

blk_insert_cloned_request(), blk_execute_rq_nowait() and
blk_flush_plug_list() either didn't check whether the queue was dead
or did it without holding queue_lock.  Update them so that dead state
is checked while holding queue_lock.

AFAICS, this plugs all holes (requeue doesn't matter as the request is
transitioning atomically from in_flight to queued).

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c |   21 +++++++++++++++++++++
 block/blk-exec.c |    5 ++++-
 2 files changed, 25 insertions(+), 1 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 53e36c6..c195298 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1724,6 +1724,10 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
 		return -EIO;
 
 	spin_lock_irqsave(q->queue_lock, flags);
+	if (unlikely(blk_queue_dead(q))) {
+		spin_unlock_irqrestore(q->queue_lock, flags);
+		return -ENODEV;
+	}
 
 	/*
 	 * Submitting request must be dequeued before calling this function
@@ -2696,6 +2700,14 @@ static void queue_unplugged(struct request_queue *q, unsigned int depth,
 	trace_block_unplug(q, depth, !from_schedule);
 
 	/*
+	 * Don't mess with dead queue.
+	 */
+	if (unlikely(blk_queue_dead(q))) {
+		spin_unlock(q->queue_lock);
+		return;
+	}
+
+	/*
 	 * If we are punting this to kblockd, then we can safely drop
 	 * the queue_lock before waking kblockd (which needs to take
 	 * this lock).
@@ -2771,6 +2783,15 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 			depth = 0;
 			spin_lock(q->queue_lock);
 		}
+
+		/*
+		 * Short-circuit if @q is dead
+		 */
+		if (unlikely(blk_queue_dead(q))) {
+			__blk_end_request_all(rq, -ENODEV);
+			continue;
+		}
+
 		/*
 		 * rq is already accounted, so use raw insert
 		 */
diff --git a/block/blk-exec.c b/block/blk-exec.c
index d11c0b8..727abb1 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -51,7 +51,10 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 	int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
 	unsigned long flags;
 
+	spin_lock_irqsave(q->queue_lock, flags);
+
 	if (unlikely(blk_queue_dead(q))) {
+		spin_unlock_irqrestore(q->queue_lock, flags);
 		rq->errors = -ENXIO;
 		if (rq->end_io)
 			rq->end_io(rq, rq->errors);
@@ -60,12 +63,12 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 
 	rq->rq_disk = bd_disk;
 	rq->end_io = done;
-	spin_lock_irqsave(q->queue_lock, flags);
 	elv_add_request(q, rq, where);
 	blk_run_queue_async(q);
 	/* the queue is stopped so it won't be run */
 	if (rq->cmd_type == REQ_TYPE_PM_RESUME)
 		q->request_fn(q);
+
 	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
-- 
1.7.3.1



* Re: [PATCH 1/6] block, sx8: kill blk_insert_request()
  2011-10-21  3:56 ` [PATCH 1/6] block, sx8: kill blk_insert_request() Tejun Heo
@ 2011-10-21  4:03   ` Jeff Garzik
  0 siblings, 0 replies; 14+ messages in thread
From: Jeff Garzik @ 2011-10-21  4:03 UTC (permalink / raw)
  To: Tejun Heo; +Cc: axboe, vgoyal, davem, linux-kernel, ctalbott, rni

On 10/20/2011 11:56 PM, Tejun Heo wrote:
> The only user left for blk_insert_request() is sx8 and it can be
> trivially switched to use blk_execute_rq_nowait() - special requests
> aren't included in io stat and sx8 doesn't use block layer tagging.
> Switch sx8 and kill blk_insert_request().
>
> This patch doesn't introduce any functional difference.
>
> Only compile tested.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: Jeff Garzik <jgarzik@pobox.com>
> Cc: Jens Axboe <axboe@kernel.dk>

Acked-by: Jeff Garzik <jgarzik@redhat.com>





* Re: [PATCH 2/6] block: allow blk_execute_rq_nowait() to be called from IRQ context
  2011-10-21  3:56 ` [PATCH 2/6] block: allow blk_execute_rq_nowait() to be called from IRQ context Tejun Heo
@ 2011-10-21  9:20   ` Christoph Hellwig
  2011-10-21 18:13     ` Tejun Heo
  0 siblings, 1 reply; 14+ messages in thread
From: Christoph Hellwig @ 2011-10-21  9:20 UTC (permalink / raw)
  To: Tejun Heo; +Cc: axboe, vgoyal, jgarzik, davem, linux-kernel, ctalbott, rni

On Thu, Oct 20, 2011 at 08:56:36PM -0700, Tejun Heo wrote:
> Currently blk_execute_rq_nowait() directly calls __blk_run_queue() and
> thus can't be called from IRQ context.  This patch updates it to use
> blk_run_queue_async() instead.  This will be used to unexport
> elv_add_request().
> 
> This changes how the queue is kicked after blk_execute_rq_nowait(), but
> it's hardly a hot path and the effect shouldn't be noticeable.

It actually very much is a fastpath for many of its users, e.g. the
SCSI tape drivers, the OSD layer and the target SCSI passthrough
backend.

I don't think blindly adding a context switch here without benchmarking
is doable.  Just add variants that do the workqueue dance or not.



* Re: [PATCH 2/6] block: allow blk_execute_rq_nowait() to be called from IRQ context
  2011-10-21  9:20   ` Christoph Hellwig
@ 2011-10-21 18:13     ` Tejun Heo
  0 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2011-10-21 18:13 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, vgoyal, jgarzik, davem, linux-kernel, ctalbott, rni

Hello,

On Fri, Oct 21, 2011 at 05:20:16AM -0400, Christoph Hellwig wrote:
> On Thu, Oct 20, 2011 at 08:56:36PM -0700, Tejun Heo wrote:
> > Currently blk_execute_rq_nowait() directly calls __blk_run_queue() and
> > thus can't be called from IRQ context.  This patch updates it to use
> > blk_run_queue_async() instead.  This will be used to unexport
> > elv_add_request().
> > 
> > This changes how the queue is kicked after blk_execute_rq_nowait(), but
> > it's hardly a hot path and the effect shouldn't be noticeable.
> 
> It actually very much is a fastpath for many of its users, e.g. the
> SCSI tape drivers, the OSD layer and the target SCSI passthrough
> backend.
> 
> I don't think blindly adding a context switch here without benchmarking
> is doable.  Just add variants that do the workqueue dance or not.

Hmm... I'd really like to keep that detail inside the block layer.  How
about something like the following?

Thanks.

From b6954535fe7a585a97e2ce3955569981b833e4db Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Fri, 21 Oct 2011 11:07:58 -0700
Subject: [PATCH 2/6] block: allow blk_execute_rq_nowait() to be called from IRQ context

Currently blk_execute_rq_nowait() directly calls __blk_run_queue() and
thus must be called from sleepable context.  This patch updates the
function such that it can be called from non-sleepable context and
schedules async execution in such cases.  This will be used to
unexport elv_add_request().

While at it, add FIXME comment for REQ_TYPE_PM_RESUME special case.

-v2: hch pointed out that blk_execute_rq_nowait() can be a hot path for
     some drivers.  Retained direct execution from sleepable context.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Christoph Hellwig <hch@infradead.org>
---
 block/blk-exec.c |   29 ++++++++++++++++++++++-------
 1 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/block/blk-exec.c b/block/blk-exec.c
index a1ebceb..b686f2b 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -49,6 +49,8 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 			   rq_end_io_fn *done)
 {
 	int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
+	bool may_sleep = !preempt_count() && !irqs_disabled();
+	unsigned long flags;
 
 	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
 		rq->errors = -ENXIO;
@@ -59,14 +61,27 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 
 	rq->rq_disk = bd_disk;
 	rq->end_io = done;
-	WARN_ON(irqs_disabled());
-	spin_lock_irq(q->queue_lock);
+
+	spin_lock_irqsave(q->queue_lock, flags);
 	__elv_add_request(q, rq, where);
-	__blk_run_queue(q);
-	/* the queue is stopped so it won't be run */
-	if (rq->cmd_type == REQ_TYPE_PM_RESUME)
-		q->request_fn(q);
-	spin_unlock_irq(q->queue_lock);
+
+	/*
+	 * Some drivers beat this path pretty hard.  As an optimization, if
+	 * we're being called from sleepable context, run @q directly.
+	 */
+	if (may_sleep) {
+		__blk_run_queue(q);
+		/*
+		 * The queue is stopped so it won't be run.
+		 * FIXME: Please kill me along with REQ_TYPE_PM_RESUME.
+		 */
+		if (rq->cmd_type == REQ_TYPE_PM_RESUME)
+			q->request_fn(q);
+	} else {
+		blk_run_queue_async(q);
+	}
+
+	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
 
-- 
1.7.3.1



* Re: [PATCH 4/6] block: add blk_queue_dead()
  2011-10-26 17:20   ` Vivek Goyal
@ 2011-10-26 19:25     ` Tejun Heo
  0 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2011-10-26 19:25 UTC (permalink / raw)
  To: Vivek Goyal; +Cc: axboe, jgarzik, davem, hch, ctalbott, rni, linux-kernel

Hello,

On Wed, Oct 26, 2011 at 01:20:49PM -0400, Vivek Goyal wrote:
> On Tue, Oct 25, 2011 at 06:02:05PM -0700, Tejun Heo wrote:
> > @@ -603,7 +603,7 @@ EXPORT_SYMBOL(blk_init_allocated_queue_node);
> >  
> >  int blk_get_queue(struct request_queue *q)
> >  {
> > -	if (likely(!test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
> > +	if (likely(!blk_queue_dead(q))) {
> >  		kobject_get(&q->kobj);
> >  		return 0;
> 
> I thought the DEAD flag is now synchronized with the queue lock.  So the
> protocol is that the caller should be holding the queue lock here first?

The requirement is that issuing and processing of requests don't happen
once DEAD is set, and to guarantee that, it's necessary to both set DEAD
and check for it in the rq alloc/issue paths.

blk_get_queue() is inherently opportunistic, as holding a reference
doesn't prevent the queue from being killed.  It doesn't make any sense
to require its caller to grab the spinlock - either the caller doesn't
need a refcnt (because it's already holding the spinlock) or the result
of the check becomes stale as soon as it drops the spinlock.  Whether
testing DEAD is helpful at all (it isn't necessary per se) is another
question, though.
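
To make that concrete, here is a minimal caller-side sketch
(illustrative only, not part of the patches; the work done while
holding the lock is a placeholder):

	/* opportunistic ref; returns non-zero if @q is already DEAD */
	if (blk_get_queue(q))
		return;

	spin_lock_irq(q->queue_lock);
	if (blk_queue_dead(q)) {
		/* the ref only pins the memory, it doesn't keep @q alive */
		spin_unlock_irq(q->queue_lock);
		blk_put_queue(q);
		return;
	}
	/* ... issue requests / poke @q while holding queue_lock ... */
	spin_unlock_irq(q->queue_lock);

	blk_put_queue(q);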

Thanks.

-- 
tejun


* Re: [PATCH 4/6] block: add blk_queue_dead()
  2011-10-26  1:02 ` [PATCH 4/6] block: add blk_queue_dead() Tejun Heo
  2011-10-26  8:18   ` Jens Axboe
@ 2011-10-26 17:20   ` Vivek Goyal
  2011-10-26 19:25     ` Tejun Heo
  1 sibling, 1 reply; 14+ messages in thread
From: Vivek Goyal @ 2011-10-26 17:20 UTC (permalink / raw)
  To: Tejun Heo; +Cc: axboe, jgarzik, davem, hch, ctalbott, rni, linux-kernel

On Tue, Oct 25, 2011 at 06:02:05PM -0700, Tejun Heo wrote:
> There are a number of QUEUE_FLAG_DEAD tests.  Add blk_queue_dead()
> macro and use it.
> 
> This patch doesn't introduce any functional difference.
> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: Jens Axboe <axboe@kernel.dk>
> ---
>  block/blk-core.c       |    6 +++---
>  block/blk-exec.c       |    2 +-
>  block/blk-sysfs.c      |    4 ++--
>  block/blk-throttle.c   |    4 ++--
>  block/blk.h            |    2 +-
>  include/linux/blkdev.h |    1 +
>  6 files changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 1efb943..7d39897 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -603,7 +603,7 @@ EXPORT_SYMBOL(blk_init_allocated_queue_node);
>  
>  int blk_get_queue(struct request_queue *q)
>  {
> -	if (likely(!test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
> +	if (likely(!blk_queue_dead(q))) {
>  		kobject_get(&q->kobj);
>  		return 0;

I thought the DEAD flag is now synchronized with the queue lock.  So the
protocol is that the caller should be holding the queue lock here first?

Thanks
Vivek


* Re: [PATCH 4/6] block: add blk_queue_dead()
  2011-10-26  1:02 ` [PATCH 4/6] block: add blk_queue_dead() Tejun Heo
@ 2011-10-26  8:18   ` Jens Axboe
  2011-10-26 17:20   ` Vivek Goyal
  1 sibling, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2011-10-26  8:18 UTC (permalink / raw)
  To: Tejun Heo; +Cc: vgoyal, jgarzik, davem, hch, ctalbott, rni, linux-kernel

On 2011-10-26 03:02, Tejun Heo wrote:
> There are a number of QUEUE_FLAG_DEAD tests.  Add blk_queue_dead()
> macro and use it.
> 
> This patch doesn't introduce any functional difference.

Applied 4-6 as well, thanks!

-- 
Jens Axboe



* [PATCH 4/6] block: add blk_queue_dead()
  2011-10-26  1:02 [PATCHSET block:for-3.2/core] further updates to blk_cleanup_queue(), take#2 Tejun Heo
@ 2011-10-26  1:02 ` Tejun Heo
  2011-10-26  8:18   ` Jens Axboe
  2011-10-26 17:20   ` Vivek Goyal
  0 siblings, 2 replies; 14+ messages in thread
From: Tejun Heo @ 2011-10-26  1:02 UTC (permalink / raw)
  To: axboe, vgoyal, jgarzik, davem, hch; +Cc: ctalbott, rni, linux-kernel, Tejun Heo

There are a number of QUEUE_FLAG_DEAD tests.  Add blk_queue_dead()
macro and use it.

This patch doesn't introduce any functional difference.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c       |    6 +++---
 block/blk-exec.c       |    2 +-
 block/blk-sysfs.c      |    4 ++--
 block/blk-throttle.c   |    4 ++--
 block/blk.h            |    2 +-
 include/linux/blkdev.h |    1 +
 6 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 1efb943..7d39897 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -603,7 +603,7 @@ EXPORT_SYMBOL(blk_init_allocated_queue_node);
 
 int blk_get_queue(struct request_queue *q)
 {
-	if (likely(!test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
+	if (likely(!blk_queue_dead(q))) {
 		kobject_get(&q->kobj);
 		return 0;
 	}
@@ -750,7 +750,7 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
 	const bool is_sync = rw_is_sync(rw_flags) != 0;
 	int may_queue;
 
-	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
+	if (unlikely(blk_queue_dead(q)))
 		return NULL;
 
 	may_queue = elv_may_queue(q, rw_flags);
@@ -870,7 +870,7 @@ static struct request *get_request_wait(struct request_queue *q, int rw_flags,
 		struct io_context *ioc;
 		struct request_list *rl = &q->rq;
 
-		if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
+		if (unlikely(blk_queue_dead(q)))
 			return NULL;
 
 		prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
diff --git a/block/blk-exec.c b/block/blk-exec.c
index 951eda7..8716557 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -52,7 +52,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 	bool may_sleep = !preempt_count() && !irqs_disabled();
 	unsigned long flags;
 
-	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
+	if (unlikely(blk_queue_dead(q))) {
 		rq->errors = -ENXIO;
 		if (rq->end_io)
 			rq->end_io(rq, rq->errors);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index e7f9f65..f0b2ca8 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -425,7 +425,7 @@ queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
 	if (!entry->show)
 		return -EIO;
 	mutex_lock(&q->sysfs_lock);
-	if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
+	if (blk_queue_dead(q)) {
 		mutex_unlock(&q->sysfs_lock);
 		return -ENOENT;
 	}
@@ -447,7 +447,7 @@ queue_attr_store(struct kobject *kobj, struct attribute *attr,
 
 	q = container_of(kobj, struct request_queue, kobj);
 	mutex_lock(&q->sysfs_lock);
-	if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) {
+	if (blk_queue_dead(q)) {
 		mutex_unlock(&q->sysfs_lock);
 		return -ENOENT;
 	}
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 4553245..5eed6a7 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -310,7 +310,7 @@ static struct throtl_grp * throtl_get_tg(struct throtl_data *td)
 	struct request_queue *q = td->queue;
 
 	/* no throttling for dead queue */
-	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
+	if (unlikely(blk_queue_dead(q)))
 		return NULL;
 
 	rcu_read_lock();
@@ -335,7 +335,7 @@ static struct throtl_grp * throtl_get_tg(struct throtl_data *td)
 	spin_lock_irq(q->queue_lock);
 
 	/* Make sure @q is still alive */
-	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
+	if (unlikely(blk_queue_dead(q))) {
 		kfree(tg);
 		return NULL;
 	}
diff --git a/block/blk.h b/block/blk.h
index 77ad885..f8c797c 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -86,7 +86,7 @@ static inline struct request *__elv_next_request(struct request_queue *q)
 			q->flush_queue_delayed = 1;
 			return NULL;
 		}
-		if (test_bit(QUEUE_FLAG_DEAD, &q->queue_flags) ||
+		if (unlikely(blk_queue_dead(q)) ||
 		    !q->elevator->ops->elevator_dispatch_fn(q, 0))
 			return NULL;
 	}
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 45563b0..6a4ab84 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -481,6 +481,7 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
 
 #define blk_queue_tagged(q)	test_bit(QUEUE_FLAG_QUEUED, &(q)->queue_flags)
 #define blk_queue_stopped(q)	test_bit(QUEUE_FLAG_STOPPED, &(q)->queue_flags)
+#define blk_queue_dead(q)	test_bit(QUEUE_FLAG_DEAD, &(q)->queue_flags)
 #define blk_queue_nomerges(q)	test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)
 #define blk_queue_noxmerges(q)	\
 	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
-- 
1.7.3.1


