All of lore.kernel.org
* [GIT PATCH] block: cleanup patches, take#2
@ 2009-03-16  2:28 Tejun Heo
  2009-03-16  2:28 ` [PATCH 01/17] ide: use blk_run_queue() instead of blk_start_queueing() Tejun Heo
                   ` (17 more replies)
  0 siblings, 18 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier

Hello,

This patchset is available in the following git tree.

 git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git block-cleanup

This patchset contains the following 17 cleanup patches.

 0001-ide-use-blk_run_queue-instead-of-blk_start_queuei.patch
 0002-ide-don-t-set-REQ_SOFTBARRIER.patch
 0003-ide-use-blk_update_request-instead-of-blk_end_req.patch
 0004-block-merge-blk_invoke_request_fn-into-__blk_run_.patch
 0005-block-kill-blk_start_queueing.patch
 0006-block-don-t-set-REQ_NOMERGE-unnecessarily.patch
 0007-block-cleanup-REQ_SOFTBARRIER-usages.patch
 0008-block-clean-up-misc-stuff-after-block-layer-timeout.patch
 0009-block-reorder-request-completion-functions.patch
 0010-block-reorganize-request-fetching-functions.patch
 0011-block-kill-blk_end_request_callback.patch
 0012-block-clean-up-request-completion-API.patch
 0013-block-move-rq-start_time-initialization-to-blk_rq_.patch
 0014-block-implement-and-use-__-blk_end_request_all.patch
 0015-block-kill-end_request.patch
 0016-ubd-simplify-block-request-completion.patch
 0017-block-clean-up-unnecessary-stuff-from-block-drivers.patch

It's on top of the current linux-2.6-block/for-2.6.30[1].  Changes
from the last take[2] are:

* IDE changes separated out to 0001-0003
* IDE end_all conversion dropped

Bartlomiej, 0001-0003 are mostly trivial and shouldn't cause too many
merge headaches later.  Can these go through the block tree?  I'll
base further IDE changes on top of the linux-next/pata-2.6 patchset
(whatever the merge strategy turns out to be).

Thanks.

--
tejun

[1] 6319ec3182b26abecd2fa9ab97c945f0161d4e36
[2] http://thread.gmane.org/gmane.linux.kernel/806280

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH 01/17] ide: use blk_run_queue() instead of blk_start_queueing()
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 02/17] ide: don't set REQ_SOFTBARRIER Tejun Heo
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

blk_start_queueing() is being phased out in favor of
[__]blk_run_queue().  Switch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
---
 drivers/ide/ide-park.c |    7 ++-----
 1 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/ide/ide-park.c b/drivers/ide/ide-park.c
index c875a95..5f121f7 100644
--- a/drivers/ide/ide-park.c
+++ b/drivers/ide/ide-park.c
@@ -24,11 +24,8 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout)
 			start_queue = 1;
 		spin_unlock_irq(&hwif->lock);
 
-		if (start_queue) {
-			spin_lock_irq(q->queue_lock);
-			blk_start_queueing(q);
-			spin_unlock_irq(q->queue_lock);
-		}
+		if (start_queue)
+			blk_run_queue(q);
 		return;
 	}
 	spin_unlock_irq(&hwif->lock);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 02/17] ide: don't set REQ_SOFTBARRIER
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
  2009-03-16  2:28 ` [PATCH 01/17] ide: use blk_run_queue() instead of blk_start_queueing() Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 03/17] ide: use blk_update_request() instead of blk_end_request_callback() Tejun Heo
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

ide doesn't have to worry about REQ_SOFTBARRIER.  Don't set it.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
---
 drivers/ide/ide-disk.c   |    1 -
 drivers/ide/ide-ioctls.c |    1 -
 2 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/drivers/ide/ide-disk.c b/drivers/ide/ide-disk.c
index 806760d..0cd287d 100644
--- a/drivers/ide/ide-disk.c
+++ b/drivers/ide/ide-disk.c
@@ -405,7 +405,6 @@ static void idedisk_prepare_flush(struct request_queue *q, struct request *rq)
 	task->data_phase = TASKFILE_NO_DATA;
 
 	rq->cmd_type = REQ_TYPE_ATA_TASKFILE;
-	rq->cmd_flags |= REQ_SOFTBARRIER;
 	rq->special = task;
 }
 
diff --git a/drivers/ide/ide-ioctls.c b/drivers/ide/ide-ioctls.c
index 1be263e..d440fbb 100644
--- a/drivers/ide/ide-ioctls.c
+++ b/drivers/ide/ide-ioctls.c
@@ -229,7 +229,6 @@ static int generic_drive_reset(ide_drive_t *drive)
 	rq->cmd_type = REQ_TYPE_SPECIAL;
 	rq->cmd_len = 1;
 	rq->cmd[0] = REQ_DRIVE_RESET;
-	rq->cmd_flags |= REQ_SOFTBARRIER;
 	if (blk_execute_rq(drive->queue, NULL, rq, 1))
 		ret = rq->errors;
 	blk_put_request(rq);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 03/17] ide: use blk_update_request() instead of blk_end_request_callback()
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
  2009-03-16  2:28 ` [PATCH 01/17] ide: use blk_run_queue() instead of blk_start_queueing() Tejun Heo
  2009-03-16  2:28 ` [PATCH 02/17] ide: don't set REQ_SOFTBARRIER Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 04/17] block: merge blk_invoke_request_fn() into __blk_run_queue() Tejun Heo
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

blk_end_request_callback() is being phased out in favor of
blk_update_request().  Switch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
---
 drivers/ide/ide-cd.c |   16 ++--------------
 1 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c
index ddfbea4..f825d50 100644
--- a/drivers/ide/ide-cd.c
+++ b/drivers/ide/ide-cd.c
@@ -748,16 +748,6 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
 	return (flags & REQ_FAILED) ? -EIO : 0;
 }
 
-/*
- * Called from blk_end_request_callback() after the data of the request is
- * completed and before the request itself is completed. By returning value '1',
- * blk_end_request_callback() returns immediately without completing it.
- */
-static int cdrom_newpc_intr_dummy_cb(struct request *rq)
-{
-	return 1;
-}
-
 static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
 {
 	ide_hwif_t *hwif = drive->hwif;
@@ -932,12 +922,10 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
 			/*
 			 * The request can't be completed until DRQ is cleared.
 			 * So complete the data, but don't complete the request
-			 * using the dummy function for the callback feature
-			 * of blk_end_request_callback().
+			 * using blk_update_request().
 			 */
 			if (rq->bio)
-				blk_end_request_callback(rq, 0, blen,
-						 cdrom_newpc_intr_dummy_cb);
+				blk_update_request(rq, 0, blen);
 			else
 				rq->data += blen;
 		}
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 04/17] block: merge blk_invoke_request_fn() into __blk_run_queue()
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (2 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 03/17] ide: use blk_update_request() instead of blk_end_request_callback() Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 05/17] block: kill blk_start_queueing() Tejun Heo
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

Impact: merge two subtly different internal functions

__blk_run_queue() wraps blk_invoke_request_fn() such that it
additionally removes the plug and bails out early if the queue is
empty.  Both extra operations have their own pending mechanisms and
don't cause any harm correctness-wise when they are done superfluously.

As blk_start_queue() is the only user of blk_invoke_request_fn(),
there isn't much reason to keep both functions around.  Merge
blk_invoke_request_fn() into __blk_run_queue() and make
blk_start_queue() use __blk_run_queue() instead.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/blk-core.c |   35 ++++++++++++++---------------------
 1 files changed, 14 insertions(+), 21 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 7b63c9b..95dc76f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -333,24 +333,6 @@ void blk_unplug(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_unplug);
 
-static void blk_invoke_request_fn(struct request_queue *q)
-{
-	if (unlikely(blk_queue_stopped(q)))
-		return;
-
-	/*
-	 * one level of recursion is ok and is much faster than kicking
-	 * the unplug handling
-	 */
-	if (!queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
-		q->request_fn(q);
-		queue_flag_clear(QUEUE_FLAG_REENTER, q);
-	} else {
-		queue_flag_set(QUEUE_FLAG_PLUGGED, q);
-		kblockd_schedule_work(q, &q->unplug_work);
-	}
-}
-
 /**
  * blk_start_queue - restart a previously stopped queue
  * @q:    The &struct request_queue in question
@@ -365,7 +347,7 @@ void blk_start_queue(struct request_queue *q)
 	WARN_ON(!irqs_disabled());
 
 	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
-	blk_invoke_request_fn(q);
+	__blk_run_queue(q);
 }
 EXPORT_SYMBOL(blk_start_queue);
 
@@ -425,12 +407,23 @@ void __blk_run_queue(struct request_queue *q)
 {
 	blk_remove_plug(q);
 
+	if (unlikely(blk_queue_stopped(q)))
+		return;
+
+	if (elv_queue_empty(q))
+		return;
+
 	/*
 	 * Only recurse once to avoid overrunning the stack, let the unplug
 	 * handling reinvoke the handler shortly if we already got there.
 	 */
-	if (!elv_queue_empty(q))
-		blk_invoke_request_fn(q);
+	if (!queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
+		q->request_fn(q);
+		queue_flag_clear(QUEUE_FLAG_REENTER, q);
+	} else {
+		queue_flag_set(QUEUE_FLAG_PLUGGED, q);
+		kblockd_schedule_work(q, &q->unplug_work);
+	}
 }
 EXPORT_SYMBOL(__blk_run_queue);
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 05/17] block: kill blk_start_queueing()
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (3 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 04/17] block: merge blk_invoke_request_fn() into __blk_run_queue() Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 06/17] block: don't set REQ_NOMERGE unnecessarily Tejun Heo
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

Impact: removal of mostly duplicate interface function

blk_start_queueing() is identical to __blk_run_queue() except that it
doesn't check for recursion.  None of the current users depends on
blk_start_queueing() running request_fn directly.  Replace usages of
blk_start_queueing() with [__]blk_run_queue() and kill it.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/as-iosched.c     |    6 +-----
 block/blk-core.c       |   28 ++--------------------------
 block/cfq-iosched.c    |   10 +++-------
 block/elevator.c       |    7 +++----
 include/linux/blkdev.h |    1 -
 5 files changed, 9 insertions(+), 43 deletions(-)

diff --git a/block/as-iosched.c b/block/as-iosched.c
index 631f6f4..da8e272 100644
--- a/block/as-iosched.c
+++ b/block/as-iosched.c
@@ -1315,12 +1315,8 @@ static void as_merged_requests(struct request_queue *q, struct request *req,
 static void as_work_handler(struct work_struct *work)
 {
 	struct as_data *ad = container_of(work, struct as_data, antic_work);
-	struct request_queue *q = ad->q;
-	unsigned long flags;
 
-	spin_lock_irqsave(q->queue_lock, flags);
-	blk_start_queueing(q);
-	spin_unlock_irqrestore(q->queue_lock, flags);
+	blk_run_queue(ad->q);
 }
 
 static int as_may_queue(struct request_queue *q, int rw)
diff --git a/block/blk-core.c b/block/blk-core.c
index 95dc76f..7c2d836 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -433,9 +433,7 @@ EXPORT_SYMBOL(__blk_run_queue);
  *
  * Description:
  *    Invoke request handling on this queue, if it has pending work to do.
- *    May be used to restart queueing when a request has completed. Also
- *    See @blk_start_queueing.
- *
+ *    May be used to restart queueing when a request has completed.
  */
 void blk_run_queue(struct request_queue *q)
 {
@@ -893,28 +891,6 @@ struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
 EXPORT_SYMBOL(blk_get_request);
 
 /**
- * blk_start_queueing - initiate dispatch of requests to device
- * @q:		request queue to kick into gear
- *
- * This is basically a helper to remove the need to know whether a queue
- * is plugged or not if someone just wants to initiate dispatch of requests
- * for this queue. Should be used to start queueing on a device outside
- * of ->request_fn() context. Also see @blk_run_queue.
- *
- * The queue lock must be held with interrupts disabled.
- */
-void blk_start_queueing(struct request_queue *q)
-{
-	if (!blk_queue_plugged(q)) {
-		if (unlikely(blk_queue_stopped(q)))
-			return;
-		q->request_fn(q);
-	} else
-		__generic_unplug_device(q);
-}
-EXPORT_SYMBOL(blk_start_queueing);
-
-/**
  * blk_requeue_request - put a request back on queue
  * @q:		request queue where request should be inserted
  * @rq:		request to be inserted
@@ -982,7 +958,7 @@ void blk_insert_request(struct request_queue *q, struct request *rq,
 
 	drive_stat_acct(rq, 1);
 	__elv_add_request(q, rq, where, 0);
-	blk_start_queueing(q);
+	__blk_run_queue(q);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 EXPORT_SYMBOL(blk_insert_request);
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 664ebfd..8190db8 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1900,7 +1900,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 		if (cfq_cfqq_wait_request(cfqq)) {
 			cfq_mark_cfqq_must_dispatch(cfqq);
 			del_timer(&cfqd->idle_slice_timer);
-			blk_start_queueing(cfqd->queue);
+			__blk_run_queue(cfqd->queue);
 		}
 	} else if (cfq_should_preempt(cfqd, cfqq, rq)) {
 		/*
@@ -1911,7 +1911,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 		 */
 		cfq_preempt_queue(cfqd, cfqq);
 		cfq_mark_cfqq_must_dispatch(cfqq);
-		blk_start_queueing(cfqd->queue);
+		__blk_run_queue(cfqd->queue);
 	}
 }
 
@@ -2143,12 +2143,8 @@ static void cfq_kick_queue(struct work_struct *work)
 {
 	struct cfq_data *cfqd =
 		container_of(work, struct cfq_data, unplug_work);
-	struct request_queue *q = cfqd->queue;
-	unsigned long flags;
 
-	spin_lock_irqsave(q->queue_lock, flags);
-	blk_start_queueing(q);
-	spin_unlock_irqrestore(q->queue_lock, flags);
+	blk_run_queue(cfqd->queue);
 }
 
 /*
diff --git a/block/elevator.c b/block/elevator.c
index 98259ed..fca4436 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -618,8 +618,7 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 		 *   with anything.  There's no point in delaying queue
 		 *   processing.
 		 */
-		blk_remove_plug(q);
-		blk_start_queueing(q);
+		__blk_run_queue(q);
 		break;
 
 	case ELEVATOR_INSERT_SORT:
@@ -946,7 +945,7 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
 		    blk_ordered_cur_seq(q) == QUEUE_ORDSEQ_DRAIN &&
 		    (!next || blk_ordered_req_seq(next) > QUEUE_ORDSEQ_DRAIN)) {
 			blk_ordered_complete_seq(q, QUEUE_ORDSEQ_DRAIN, 0);
-			blk_start_queueing(q);
+			__blk_run_queue(q);
 		}
 	}
 }
@@ -1107,7 +1106,7 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 	elv_drain_elevator(q);
 
 	while (q->rq.elvpriv) {
-		blk_start_queueing(q);
+		__blk_run_queue(q);
 		spin_unlock_irq(q->queue_lock);
 		msleep(10);
 		spin_lock_irq(q->queue_lock);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 465d6ba..4c05bb9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -775,7 +775,6 @@ extern void blk_sync_queue(struct request_queue *q);
 extern void __blk_stop_queue(struct request_queue *q);
 extern void __blk_run_queue(struct request_queue *);
 extern void blk_run_queue(struct request_queue *);
-extern void blk_start_queueing(struct request_queue *);
 extern int blk_rq_map_user(struct request_queue *, struct request *,
 			   struct rq_map_data *, void __user *, unsigned long,
 			   gfp_t);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 06/17] block: don't set REQ_NOMERGE unnecessarily
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (4 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 05/17] block: kill blk_start_queueing() Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 07/17] block: cleanup REQ_SOFTBARRIER usages Tejun Heo
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

Impact: cleanup

RQ_NOMERGE_FLAGS already defines which REQ flags aren't mergeable.
There is no reason to set REQ_NOMERGE superfluously; it only adds to
confusion.  Don't set REQ_NOMERGE for barriers and requests with a
specific queueing directive.  REQ_NOMERGE is now exclusively used by
the merging code.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/blk-core.c |    5 +----
 block/blk-exec.c |    1 -
 2 files changed, 1 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 7c2d836..d7b2cc9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1077,16 +1077,13 @@ void init_request_from_bio(struct request *req, struct bio *bio)
 	if (bio_failfast_driver(bio))
 		req->cmd_flags |= REQ_FAILFAST_DRIVER;
 
-	/*
-	 * REQ_BARRIER implies no merging, but lets make it explicit
-	 */
 	if (unlikely(bio_discard(bio))) {
 		req->cmd_flags |= REQ_DISCARD;
 		if (bio_barrier(bio))
 			req->cmd_flags |= REQ_SOFTBARRIER;
 		req->q->prepare_discard_fn(req->q, req);
 	} else if (unlikely(bio_barrier(bio)))
-		req->cmd_flags |= (REQ_HARDBARRIER | REQ_NOMERGE);
+		req->cmd_flags |= REQ_HARDBARRIER;
 
 	if (bio_sync(bio))
 		req->cmd_flags |= REQ_RW_SYNC;
diff --git a/block/blk-exec.c b/block/blk-exec.c
index 6af716d..49557e9 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -51,7 +51,6 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 	int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
 
 	rq->rq_disk = bd_disk;
-	rq->cmd_flags |= REQ_NOMERGE;
 	rq->end_io = done;
 	WARN_ON(irqs_disabled());
 	spin_lock_irq(q->queue_lock);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 07/17] block: cleanup REQ_SOFTBARRIER usages
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (5 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 06/17] block: don't set REQ_NOMERGE unnecessarily Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 08/17] block: clean up misc stuff after block layer timeout conversion Tejun Heo
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

Impact: cleanup

blk_insert_request() doesn't need to worry about REQ_SOFTBARRIER.
Don't set it.  Combined with recent ide updates, REQ_SOFTBARRIER is
now only used in elevator proper and for discard requests.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/blk-core.c |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d7b2cc9..9e5f154 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -944,7 +944,6 @@ void blk_insert_request(struct request_queue *q, struct request *rq,
 	 * barrier
 	 */
 	rq->cmd_type = REQ_TYPE_SPECIAL;
-	rq->cmd_flags |= REQ_SOFTBARRIER;
 
 	rq->special = data;
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 08/17] block: clean up misc stuff after block layer timeout conversion
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (6 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 07/17] block: cleanup REQ_SOFTBARRIER usages Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 09/17] block: reorder request completion functions Tejun Heo
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

Impact: cleanup

* In blk_rq_timed_out_timer(), else { if } to else if

* In blk_add_timer(), simplify if/else block

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/blk-timeout.c |   22 +++++++++-------------
 1 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/block/blk-timeout.c b/block/blk-timeout.c
index bbbdc4b..0bc3961 100644
--- a/block/blk-timeout.c
+++ b/block/blk-timeout.c
@@ -122,10 +122,8 @@ void blk_rq_timed_out_timer(unsigned long data)
 			if (blk_mark_rq_complete(rq))
 				continue;
 			blk_rq_timed_out(rq);
-		} else {
-			if (!next || time_after(next, rq->deadline))
-				next = rq->deadline;
-		}
+		} else if (!next || time_after(next, rq->deadline))
+			next = rq->deadline;
 	}
 
 	/*
@@ -176,16 +174,14 @@ void blk_add_timer(struct request *req)
 	BUG_ON(!list_empty(&req->timeout_list));
 	BUG_ON(test_bit(REQ_ATOM_COMPLETE, &req->atomic_flags));
 
-	if (req->timeout)
-		req->deadline = jiffies + req->timeout;
-	else {
-		req->deadline = jiffies + q->rq_timeout;
-		/*
-		 * Some LLDs, like scsi, peek at the timeout to prevent
-		 * a command from being retried forever.
-		 */
+	/*
+	 * Some LLDs, like scsi, peek at the timeout to prevent a
+	 * command from being retried forever.
+	 */
+	if (!req->timeout)
 		req->timeout = q->rq_timeout;
-	}
+
+	req->deadline = jiffies + req->timeout;
 	list_add_tail(&req->timeout_list, &q->timeout_list);
 
 	/*
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 09/17] block: reorder request completion functions
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (7 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 08/17] block: clean up misc stuff after block layer timeout conversion Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 10/17] block: reorganize request fetching functions Tejun Heo
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

Impact: cleanup, code reorganization

Reorder request completion functions such that

* All request completion functions are located together.

* Functions which are used by only one caller are put right above the
  caller.

* end_request() is put after other completion functions but before
  blk_update_request().

This change prepares for the completion function cleanup which will
follow.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/blk-core.c       |  144 ++++++++++++++++++++++++------------------------
 include/linux/blkdev.h |   16 +++---
 2 files changed, 80 insertions(+), 80 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9e5f154..fd9dec3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1674,6 +1674,35 @@ static void blk_account_io_done(struct request *req)
 }
 
 /**
+ * blk_rq_bytes - Returns bytes left to complete in the entire request
+ * @rq: the request being processed
+ **/
+unsigned int blk_rq_bytes(struct request *rq)
+{
+	if (blk_fs_request(rq))
+		return rq->hard_nr_sectors << 9;
+
+	return rq->data_len;
+}
+EXPORT_SYMBOL_GPL(blk_rq_bytes);
+
+/**
+ * blk_rq_cur_bytes - Returns bytes left to complete in the current segment
+ * @rq: the request being processed
+ **/
+unsigned int blk_rq_cur_bytes(struct request *rq)
+{
+	if (blk_fs_request(rq))
+		return rq->current_nr_sectors << 9;
+
+	if (rq->bio)
+		return rq->bio->bi_size;
+
+	return rq->data_len;
+}
+EXPORT_SYMBOL_GPL(blk_rq_cur_bytes);
+
+/**
  * __end_that_request_first - end I/O on a request
  * @req:      the request being processed
  * @error:    %0 for success, < %0 for error
@@ -1783,6 +1812,22 @@ static int __end_that_request_first(struct request *req, int error,
 	return 1;
 }
 
+static int end_that_request_data(struct request *rq, int error,
+				 unsigned int nr_bytes, unsigned int bidi_bytes)
+{
+	if (rq->bio) {
+		if (__end_that_request_first(rq, error, nr_bytes))
+			return 1;
+
+		/* Bidi request must be completed as a whole */
+		if (blk_bidi_rq(rq) &&
+		    __end_that_request_first(rq->next_rq, error, bidi_bytes))
+			return 1;
+	}
+
+	return 0;
+}
+
 /*
  * queue lock must be held
  */
@@ -1812,78 +1857,6 @@ static void end_that_request_last(struct request *req, int error)
 }
 
 /**
- * blk_rq_bytes - Returns bytes left to complete in the entire request
- * @rq: the request being processed
- **/
-unsigned int blk_rq_bytes(struct request *rq)
-{
-	if (blk_fs_request(rq))
-		return rq->hard_nr_sectors << 9;
-
-	return rq->data_len;
-}
-EXPORT_SYMBOL_GPL(blk_rq_bytes);
-
-/**
- * blk_rq_cur_bytes - Returns bytes left to complete in the current segment
- * @rq: the request being processed
- **/
-unsigned int blk_rq_cur_bytes(struct request *rq)
-{
-	if (blk_fs_request(rq))
-		return rq->current_nr_sectors << 9;
-
-	if (rq->bio)
-		return rq->bio->bi_size;
-
-	return rq->data_len;
-}
-EXPORT_SYMBOL_GPL(blk_rq_cur_bytes);
-
-/**
- * end_request - end I/O on the current segment of the request
- * @req:	the request being processed
- * @uptodate:	error value or %0/%1 uptodate flag
- *
- * Description:
- *     Ends I/O on the current segment of a request. If that is the only
- *     remaining segment, the request is also completed and freed.
- *
- *     This is a remnant of how older block drivers handled I/O completions.
- *     Modern drivers typically end I/O on the full request in one go, unless
- *     they have a residual value to account for. For that case this function
- *     isn't really useful, unless the residual just happens to be the
- *     full current segment. In other words, don't use this function in new
- *     code. Use blk_end_request() or __blk_end_request() to end a request.
- **/
-void end_request(struct request *req, int uptodate)
-{
-	int error = 0;
-
-	if (uptodate <= 0)
-		error = uptodate ? uptodate : -EIO;
-
-	__blk_end_request(req, error, req->hard_cur_sectors << 9);
-}
-EXPORT_SYMBOL(end_request);
-
-static int end_that_request_data(struct request *rq, int error,
-				 unsigned int nr_bytes, unsigned int bidi_bytes)
-{
-	if (rq->bio) {
-		if (__end_that_request_first(rq, error, nr_bytes))
-			return 1;
-
-		/* Bidi request must be completed as a whole */
-		if (blk_bidi_rq(rq) &&
-		    __end_that_request_first(rq->next_rq, error, bidi_bytes))
-			return 1;
-	}
-
-	return 0;
-}
-
-/**
  * blk_end_io - Generic end_io function to complete a request.
  * @rq:           the request being processed
  * @error:        %0 for success, < %0 for error
@@ -1993,6 +1966,33 @@ int blk_end_bidi_request(struct request *rq, int error, unsigned int nr_bytes,
 EXPORT_SYMBOL_GPL(blk_end_bidi_request);
 
 /**
+ * end_request - end I/O on the current segment of the request
+ * @req:	the request being processed
+ * @uptodate:	error value or %0/%1 uptodate flag
+ *
+ * Description:
+ *     Ends I/O on the current segment of a request. If that is the only
+ *     remaining segment, the request is also completed and freed.
+ *
+ *     This is a remnant of how older block drivers handled I/O completions.
+ *     Modern drivers typically end I/O on the full request in one go, unless
+ *     they have a residual value to account for. For that case this function
+ *     isn't really useful, unless the residual just happens to be the
+ *     full current segment. In other words, don't use this function in new
+ *     code. Use blk_end_request() or __blk_end_request() to end a request.
+ **/
+void end_request(struct request *req, int uptodate)
+{
+	int error = 0;
+
+	if (uptodate <= 0)
+		error = uptodate ? uptodate : -EIO;
+
+	__blk_end_request(req, error, req->hard_cur_sectors << 9);
+}
+EXPORT_SYMBOL(end_request);
+
+/**
  * blk_update_request - Special helper function for request stacking drivers
  * @rq:           the request being processed
  * @error:        %0 for success, < %0 for error
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4c05bb9..cdfac4f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -810,6 +810,14 @@ static inline void blk_run_address_space(struct address_space *mapping)
 extern void blkdev_dequeue_request(struct request *req);
 
 /*
+ * blk_end_request() takes bytes instead of sectors as a complete size.
+ * blk_rq_bytes() returns bytes left to complete in the entire request.
+ * blk_rq_cur_bytes() returns bytes left to complete in the current segment.
+ */
+extern unsigned int blk_rq_bytes(struct request *rq);
+extern unsigned int blk_rq_cur_bytes(struct request *rq);
+
+/*
  * blk_end_request() and friends.
  * __blk_end_request() and end_request() must be called with
  * the request queue spinlock acquired.
@@ -836,14 +844,6 @@ extern void blk_update_request(struct request *rq, int error,
 			       unsigned int nr_bytes);
 
 /*
- * blk_end_request() takes bytes instead of sectors as a complete size.
- * blk_rq_bytes() returns bytes left to complete in the entire request.
- * blk_rq_cur_bytes() returns bytes left to complete in the current segment.
- */
-extern unsigned int blk_rq_bytes(struct request *rq);
-extern unsigned int blk_rq_cur_bytes(struct request *rq);
-
-/*
  * Access functions for manipulating queue properties
  */
 extern struct request_queue *blk_init_queue_node(request_fn_proc *rfn,
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 10/17] block: reorganize request fetching functions
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (8 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 09/17] block: reorder request completion functions Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 11/17] block: kill blk_end_request_callback() Tejun Heo
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

Impact: code reorganization

elv_next_request() and elv_dequeue_request() are public block layer
interfaces rather than actual elevator implementation.  They mostly
deal with how requests interact with the block layer and low level
drivers at the beginning of request processing, whereas
__elv_next_request() is the actual elevator request fetching
interface.

Move the two functions to blk-core.c.  This prepares for further
interface cleanup.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/blk-core.c |   95 ++++++++++++++++++++++++++++++++++++++++
 block/blk.h      |   37 ++++++++++++++++
 block/elevator.c |  128 ------------------------------------------------------
 3 files changed, 132 insertions(+), 128 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index fd9dec3..0d97fbe 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1702,6 +1702,101 @@ unsigned int blk_rq_cur_bytes(struct request *rq)
 }
 EXPORT_SYMBOL_GPL(blk_rq_cur_bytes);
 
+struct request *elv_next_request(struct request_queue *q)
+{
+	struct request *rq;
+	int ret;
+
+	while ((rq = __elv_next_request(q)) != NULL) {
+		if (!(rq->cmd_flags & REQ_STARTED)) {
+			/*
+			 * This is the first time the device driver
+			 * sees this request (possibly after
+			 * requeueing).  Notify IO scheduler.
+			 */
+			if (blk_sorted_rq(rq))
+				elv_activate_rq(q, rq);
+
+			/*
+			 * just mark as started even if we don't start
+			 * it, a request that has been delayed should
+			 * not be passed by new incoming requests
+			 */
+			rq->cmd_flags |= REQ_STARTED;
+			trace_block_rq_issue(q, rq);
+		}
+
+		if (!q->boundary_rq || q->boundary_rq == rq) {
+			q->end_sector = rq_end_sector(rq);
+			q->boundary_rq = NULL;
+		}
+
+		if (rq->cmd_flags & REQ_DONTPREP)
+			break;
+
+		if (q->dma_drain_size && rq->data_len) {
+			/*
+			 * make sure space for the drain appears we
+			 * know we can do this because max_hw_segments
+			 * has been adjusted to be one fewer than the
+			 * device can handle
+			 */
+			rq->nr_phys_segments++;
+		}
+
+		if (!q->prep_rq_fn)
+			break;
+
+		ret = q->prep_rq_fn(q, rq);
+		if (ret == BLKPREP_OK) {
+			break;
+		} else if (ret == BLKPREP_DEFER) {
+			/*
+			 * the request may have been (partially) prepped.
+			 * we need to keep this request in the front to
+			 * avoid resource deadlock.  REQ_STARTED will
+			 * prevent other fs requests from passing this one.
+			 */
+			if (q->dma_drain_size && rq->data_len &&
+			    !(rq->cmd_flags & REQ_DONTPREP)) {
+				/*
+				 * remove the space for the drain we added
+				 * so that we don't add it again
+				 */
+				--rq->nr_phys_segments;
+			}
+
+			rq = NULL;
+			break;
+		} else if (ret == BLKPREP_KILL) {
+			rq->cmd_flags |= REQ_QUIET;
+			__blk_end_request(rq, -EIO, blk_rq_bytes(rq));
+		} else {
+			printk(KERN_ERR "%s: bad return=%d\n", __func__, ret);
+			break;
+		}
+	}
+
+	return rq;
+}
+EXPORT_SYMBOL(elv_next_request);
+
+void elv_dequeue_request(struct request_queue *q, struct request *rq)
+{
+	BUG_ON(list_empty(&rq->queuelist));
+	BUG_ON(ELV_ON_HASH(rq));
+
+	list_del_init(&rq->queuelist);
+
+	/*
+	 * the time frame between a request being removed from the lists
+	 * and to it is freed is accounted as io that is in progress at
+	 * the driver side.
+	 */
+	if (blk_account_rq(rq))
+		q->in_flight++;
+}
+
 /**
  * __end_that_request_first - end I/O on a request
  * @req:      the request being processed
diff --git a/block/blk.h b/block/blk.h
index 0dce92c..3979fd1 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -43,6 +43,43 @@ static inline void blk_clear_rq_complete(struct request *rq)
 	clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
 }
 
+/*
+ * Internal elevator interface
+ */
+#define ELV_ON_HASH(rq)		(!hlist_unhashed(&(rq)->hash))
+
+static inline struct request *__elv_next_request(struct request_queue *q)
+{
+	struct request *rq;
+
+	while (1) {
+		while (!list_empty(&q->queue_head)) {
+			rq = list_entry_rq(q->queue_head.next);
+			if (blk_do_ordered(q, &rq))
+				return rq;
+		}
+
+		if (!q->elevator->ops->elevator_dispatch_fn(q, 0))
+			return NULL;
+	}
+}
+
+static inline void elv_activate_rq(struct request_queue *q, struct request *rq)
+{
+	struct elevator_queue *e = q->elevator;
+
+	if (e->ops->elevator_activate_req_fn)
+		e->ops->elevator_activate_req_fn(q, rq);
+}
+
+static inline void elv_deactivate_rq(struct request_queue *q, struct request *rq)
+{
+	struct elevator_queue *e = q->elevator;
+
+	if (e->ops->elevator_deactivate_req_fn)
+		e->ops->elevator_deactivate_req_fn(q, rq);
+}
+
 #ifdef CONFIG_FAIL_IO_TIMEOUT
 int blk_should_fake_timeout(struct request_queue *);
 ssize_t part_timeout_show(struct device *, struct device_attribute *, char *);
diff --git a/block/elevator.c b/block/elevator.c
index fca4436..fd17605 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -53,7 +53,6 @@ static const int elv_hash_shift = 6;
 		(hash_long(ELV_HASH_BLOCK((sec)), elv_hash_shift))
 #define ELV_HASH_ENTRIES	(1 << elv_hash_shift)
 #define rq_hash_key(rq)		((rq)->sector + (rq)->nr_sectors)
-#define ELV_ON_HASH(rq)		(!hlist_unhashed(&(rq)->hash))
 
 DEFINE_TRACE(block_rq_insert);
 DEFINE_TRACE(block_rq_issue);
@@ -310,22 +309,6 @@ void elevator_exit(struct elevator_queue *e)
 }
 EXPORT_SYMBOL(elevator_exit);
 
-static void elv_activate_rq(struct request_queue *q, struct request *rq)
-{
-	struct elevator_queue *e = q->elevator;
-
-	if (e->ops->elevator_activate_req_fn)
-		e->ops->elevator_activate_req_fn(q, rq);
-}
-
-static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
-{
-	struct elevator_queue *e = q->elevator;
-
-	if (e->ops->elevator_deactivate_req_fn)
-		e->ops->elevator_deactivate_req_fn(q, rq);
-}
-
 static inline void __elv_rqhash_del(struct request *rq)
 {
 	hlist_del_init(&rq->hash);
@@ -733,117 +716,6 @@ void elv_add_request(struct request_queue *q, struct request *rq, int where,
 }
 EXPORT_SYMBOL(elv_add_request);
 
-static inline struct request *__elv_next_request(struct request_queue *q)
-{
-	struct request *rq;
-
-	while (1) {
-		while (!list_empty(&q->queue_head)) {
-			rq = list_entry_rq(q->queue_head.next);
-			if (blk_do_ordered(q, &rq))
-				return rq;
-		}
-
-		if (!q->elevator->ops->elevator_dispatch_fn(q, 0))
-			return NULL;
-	}
-}
-
-struct request *elv_next_request(struct request_queue *q)
-{
-	struct request *rq;
-	int ret;
-
-	while ((rq = __elv_next_request(q)) != NULL) {
-		if (!(rq->cmd_flags & REQ_STARTED)) {
-			/*
-			 * This is the first time the device driver
-			 * sees this request (possibly after
-			 * requeueing).  Notify IO scheduler.
-			 */
-			if (blk_sorted_rq(rq))
-				elv_activate_rq(q, rq);
-
-			/*
-			 * just mark as started even if we don't start
-			 * it, a request that has been delayed should
-			 * not be passed by new incoming requests
-			 */
-			rq->cmd_flags |= REQ_STARTED;
-			trace_block_rq_issue(q, rq);
-		}
-
-		if (!q->boundary_rq || q->boundary_rq == rq) {
-			q->end_sector = rq_end_sector(rq);
-			q->boundary_rq = NULL;
-		}
-
-		if (rq->cmd_flags & REQ_DONTPREP)
-			break;
-
-		if (q->dma_drain_size && rq->data_len) {
-			/*
-			 * make sure space for the drain appears we
-			 * know we can do this because max_hw_segments
-			 * has been adjusted to be one fewer than the
-			 * device can handle
-			 */
-			rq->nr_phys_segments++;
-		}
-
-		if (!q->prep_rq_fn)
-			break;
-
-		ret = q->prep_rq_fn(q, rq);
-		if (ret == BLKPREP_OK) {
-			break;
-		} else if (ret == BLKPREP_DEFER) {
-			/*
-			 * the request may have been (partially) prepped.
-			 * we need to keep this request in the front to
-			 * avoid resource deadlock.  REQ_STARTED will
-			 * prevent other fs requests from passing this one.
-			 */
-			if (q->dma_drain_size && rq->data_len &&
-			    !(rq->cmd_flags & REQ_DONTPREP)) {
-				/*
-				 * remove the space for the drain we added
-				 * so that we don't add it again
-				 */
-				--rq->nr_phys_segments;
-			}
-
-			rq = NULL;
-			break;
-		} else if (ret == BLKPREP_KILL) {
-			rq->cmd_flags |= REQ_QUIET;
-			__blk_end_request(rq, -EIO, blk_rq_bytes(rq));
-		} else {
-			printk(KERN_ERR "%s: bad return=%d\n", __func__, ret);
-			break;
-		}
-	}
-
-	return rq;
-}
-EXPORT_SYMBOL(elv_next_request);
-
-void elv_dequeue_request(struct request_queue *q, struct request *rq)
-{
-	BUG_ON(list_empty(&rq->queuelist));
-	BUG_ON(ELV_ON_HASH(rq));
-
-	list_del_init(&rq->queuelist);
-
-	/*
-	 * the time frame between a request being removed from the lists
-	 * and to it is freed is accounted as io that is in progress at
-	 * the driver side.
-	 */
-	if (blk_account_rq(rq))
-		q->in_flight++;
-}
-
 int elv_queue_empty(struct request_queue *q)
 {
 	struct elevator_queue *e = q->elevator;
-- 
1.6.0.2


* [PATCH 11/17] block: kill blk_end_request_callback()
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

Impact: removal of unused convoluted interface

With the recent IDE updates, blk_end_request_callback() no longer
has any users.  Kill it.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/blk-core.c       |   48 +++---------------------------------------------
 include/linux/blkdev.h |    3 ---
 2 files changed, 3 insertions(+), 48 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 0d97fbe..9595c4f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1957,10 +1957,6 @@ static void end_that_request_last(struct request *req, int error)
  * @error:        %0 for success, < %0 for error
  * @nr_bytes:     number of bytes to complete @rq
  * @bidi_bytes:   number of bytes to complete @rq->next_rq
- * @drv_callback: function called between completion of bios in the request
- *                and completion of the request.
- *                If the callback returns non %0, this helper returns without
- *                completion of the request.
  *
  * Description:
  *     Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
@@ -1971,8 +1967,7 @@ static void end_that_request_last(struct request *req, int error)
  *     %1 - this request is not freed yet, it still has pending buffers.
  **/
 static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
-		      unsigned int bidi_bytes,
-		      int (drv_callback)(struct request *))
+		      unsigned int bidi_bytes)
 {
 	struct request_queue *q = rq->q;
 	unsigned long flags = 0UL;
@@ -1980,10 +1975,6 @@ static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
 	if (end_that_request_data(rq, error, nr_bytes, bidi_bytes))
 		return 1;
 
-	/* Special feature for tricky drivers */
-	if (drv_callback && drv_callback(rq))
-		return 1;
-
 	add_disk_randomness(rq->rq_disk);
 
 	spin_lock_irqsave(q->queue_lock, flags);
@@ -2009,7 +2000,7 @@ static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
  **/
 int blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
 {
-	return blk_end_io(rq, error, nr_bytes, 0, NULL);
+	return blk_end_io(rq, error, nr_bytes, 0);
 }
 EXPORT_SYMBOL_GPL(blk_end_request);
 
@@ -2056,7 +2047,7 @@ EXPORT_SYMBOL_GPL(__blk_end_request);
 int blk_end_bidi_request(struct request *rq, int error, unsigned int nr_bytes,
 			 unsigned int bidi_bytes)
 {
-	return blk_end_io(rq, error, nr_bytes, bidi_bytes, NULL);
+	return blk_end_io(rq, error, nr_bytes, bidi_bytes);
 }
 EXPORT_SYMBOL_GPL(blk_end_bidi_request);
 
@@ -2117,39 +2108,6 @@ void blk_update_request(struct request *rq, int error, unsigned int nr_bytes)
 }
 EXPORT_SYMBOL_GPL(blk_update_request);
 
-/**
- * blk_end_request_callback - Special helper function for tricky drivers
- * @rq:           the request being processed
- * @error:        %0 for success, < %0 for error
- * @nr_bytes:     number of bytes to complete
- * @drv_callback: function called between completion of bios in the request
- *                and completion of the request.
- *                If the callback returns non %0, this helper returns without
- *                completion of the request.
- *
- * Description:
- *     Ends I/O on a number of bytes attached to @rq.
- *     If @rq has leftover, sets it up for the next range of segments.
- *
- *     This special helper function is used only for existing tricky drivers.
- *     (e.g. cdrom_newpc_intr() of ide-cd)
- *     This interface will be removed when such drivers are rewritten.
- *     Don't use this interface in other places anymore.
- *
- * Return:
- *     %0 - we are done with this request
- *     %1 - this request is not freed yet.
- *          this request still has pending buffers or
- *          the driver doesn't want to finish this request yet.
- **/
-int blk_end_request_callback(struct request *rq, int error,
-			     unsigned int nr_bytes,
-			     int (drv_callback)(struct request *))
-{
-	return blk_end_io(rq, error, nr_bytes, 0, drv_callback);
-}
-EXPORT_SYMBOL_GPL(blk_end_request_callback);
-
 void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
 		     struct bio *bio)
 {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cdfac4f..e8175c8 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -833,9 +833,6 @@ extern int __blk_end_request(struct request *rq, int error,
 extern int blk_end_bidi_request(struct request *rq, int error,
 				unsigned int nr_bytes, unsigned int bidi_bytes);
 extern void end_request(struct request *, int);
-extern int blk_end_request_callback(struct request *rq, int error,
-				unsigned int nr_bytes,
-				int (drv_callback)(struct request *));
 extern void blk_complete_request(struct request *);
 extern void __blk_complete_request(struct request *);
 extern void blk_abort_request(struct request *);
-- 
1.6.0.2


* [PATCH 12/17] block: clean up request completion API
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

Impact: cleanup, rq->*nr_sectors always updated after req completion

Request completion has gone through several changes and became a bit
messy over the time.  Clean it up.

1. end_that_request_data() is a thin wrapper around
   __end_that_request_first() which checks whether bio is NULL
   before doing anything and handles bidi completion.
   blk_update_request() is a thin wrapper around
   end_that_request_data() which clears nr_sectors on the last
   iteration but doesn't use the bidi completion.

   Clean it up by moving the initial bio NULL check and the
   nr_sectors clearing on the last iteration into
   __end_that_request_first() and renaming it to
   blk_update_request(), which makes blk_end_io() the only user of
   end_that_request_data().  Collapse end_that_request_data() into
   blk_end_io().

2. There are four visible completion variants - blk_end_request(),
   __blk_end_request(), blk_end_bidi_request() and end_request().
   blk_end_request() and blk_end_bidi_request() use blk_end_io() as
   the backend but __blk_end_request() and end_request() use a
   separate implementation in __blk_end_request() due to different
   locking rules.

   Make blk_end_io() handle both cases so that all four public
   completion functions become thin wrappers around it.  Rename
   blk_end_io() to __blk_end_io(), export it, and inline all the
   public completion functions.

3. As the whole request issue/completion path is about to be
   modified and audited, it's a good chance to convert the
   completion functions to return bool, which better indicates the
   intended meaning of the return values.

4. The function name end_that_request_last() is from the days when
   it was a public interface and is slightly confusing.  Give it a
   proper internal name - finish_request().

The only visible behavior change is from #1.  The nr_sectors counts
are now cleared after the final iteration no matter which function is
used to complete the request.  I couldn't find any place where the
code assumes those nr_sectors counters still contain the values for
the last segment, and the change makes the API more consistent: the
end result is now the same whether a request is completed using
[__]blk_end_request() alone or in combination with
blk_update_request().
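The locked/unlocked split in #2 can be sketched in user space.  Everything
below (struct queue, struct req, end_io() and friends) is a hypothetical
mock standing in for the kernel structures and __blk_end_io(), just enough
state to show the control flow, not the real API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical user-space mocks; NOT the kernel structures. */
struct queue { bool locked; };			/* stands in for queue_lock */
struct req   { struct queue *q; unsigned int bytes_left; };

/* The single backend (the role __blk_end_io() plays): completes up to
 * nr_bytes and takes the "queue lock" itself unless the caller says it
 * is already held. */
static bool end_io(struct req *rq, unsigned int nr_bytes, bool locked)
{
	if (nr_bytes < rq->bytes_left) {	/* partial completion */
		rq->bytes_left -= nr_bytes;
		return true;			/* still has pending bytes */
	}
	if (!locked) {
		rq->q->locked = true;		/* spin_lock_irqsave() */
		rq->bytes_left = 0;		/* finish_request() stand-in */
		rq->q->locked = false;		/* spin_unlock_irqrestore() */
	} else {
		assert(rq->q->locked);		/* caller must hold the lock */
		rq->bytes_left = 0;
	}
	return false;				/* fully completed */
}

/* The public variants then reduce to thin inline wrappers. */
static inline bool end_request(struct req *rq, unsigned int n)
{
	return end_io(rq, n, false);		/* like blk_end_request() */
}

static inline bool end_request_locked(struct req *rq, unsigned int n)
{
	return end_io(rq, n, true);		/* like __blk_end_request() */
}
```

The design point is that the locking decision is made exactly once, in the
backend, so the four wrappers differ only in the arguments they pass.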

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/blk-core.c       |  215 ++++++++++++------------------------------------
 include/linux/blkdev.h |  114 +++++++++++++++++++++++---
 2 files changed, 154 insertions(+), 175 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9595c4f..b1781dd 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1798,25 +1798,35 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
 }
 
 /**
- * __end_that_request_first - end I/O on a request
- * @req:      the request being processed
+ * blk_update_request - Special helper function for request stacking drivers
+ * @rq:	      the request being processed
  * @error:    %0 for success, < %0 for error
- * @nr_bytes: number of bytes to complete
+ * @nr_bytes: number of bytes to complete @rq
  *
  * Description:
- *     Ends I/O on a number of bytes attached to @req, and sets it up
- *     for the next range of segments (if any) in the cluster.
+ *     Ends I/O on a number of bytes attached to @rq, but doesn't complete
+ *     the request structure even if @rq doesn't have leftover.
+ *     If @rq has leftover, sets it up for the next range of segments.
+ *
+ *     This special helper function is only for request stacking drivers
+ *     (e.g. request-based dm) so that they can handle partial completion.
+ *     Actual device drivers should use blk_end_request instead.
+ *
+ *     Passing the result of blk_rq_bytes() as @nr_bytes guarantees
+ *     %false return from this function.
  *
  * Return:
- *     %0 - we are done with this request, call end_that_request_last()
- *     %1 - still buffers pending for this request
+ *     %false - this request doesn't have any more data
+ *     %true  - this request has more data
  **/
-static int __end_that_request_first(struct request *req, int error,
-				    int nr_bytes)
+bool blk_update_request(struct request *req, int error, unsigned int nr_bytes)
 {
 	int total_bytes, bio_nbytes, next_idx = 0;
 	struct bio *bio;
 
+	if (!req->bio)
+		return false;
+
 	trace_block_rq_complete(req->q, req);
 
 	/*
@@ -1889,8 +1899,16 @@ static int __end_that_request_first(struct request *req, int error,
 	/*
 	 * completely done
 	 */
-	if (!req->bio)
-		return 0;
+	if (!req->bio) {
+		/*
+		 * Reset counters so that the request stacking driver
+		 * can find how many bytes remain in the request
+		 * later.
+		 */
+		req->nr_sectors = req->hard_nr_sectors = 0;
+		req->current_nr_sectors = req->hard_cur_sectors = 0;
+		return false;
+	}
 
 	/*
 	 * if the request wasn't completed, update state
@@ -1904,29 +1922,14 @@ static int __end_that_request_first(struct request *req, int error,
 
 	blk_recalc_rq_sectors(req, total_bytes >> 9);
 	blk_recalc_rq_segments(req);
-	return 1;
-}
-
-static int end_that_request_data(struct request *rq, int error,
-				 unsigned int nr_bytes, unsigned int bidi_bytes)
-{
-	if (rq->bio) {
-		if (__end_that_request_first(rq, error, nr_bytes))
-			return 1;
-
-		/* Bidi request must be completed as a whole */
-		if (blk_bidi_rq(rq) &&
-		    __end_that_request_first(rq->next_rq, error, bidi_bytes))
-			return 1;
-	}
-
-	return 0;
+	return true;
 }
+EXPORT_SYMBOL_GPL(blk_update_request);
 
 /*
  * queue lock must be held
  */
-static void end_that_request_last(struct request *req, int error)
+static void finish_request(struct request *req, int error)
 {
 	if (blk_rq_tagged(req))
 		blk_queue_end_tag(req->q, req);
@@ -1952,161 +1955,47 @@ static void end_that_request_last(struct request *req, int error)
 }
 
 /**
- * blk_end_io - Generic end_io function to complete a request.
+ * __blk_end_io - Generic end_io function to complete a request.
  * @rq:           the request being processed
  * @error:        %0 for success, < %0 for error
  * @nr_bytes:     number of bytes to complete @rq
  * @bidi_bytes:   number of bytes to complete @rq->next_rq
+ * @locked:	  whether rq->q->queue_lock is held on entry
  *
  * Description:
  *     Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
  *     If @rq has leftover, sets it up for the next range of segments.
  *
  * Return:
- *     %0 - we are done with this request
- *     %1 - this request is not freed yet, it still has pending buffers.
+ *     %false - we are done with this request
+ *     %true  - this request is not freed yet, it still has pending buffers.
  **/
-static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
-		      unsigned int bidi_bytes)
+bool __blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
+		  unsigned int bidi_bytes, bool locked)
 {
 	struct request_queue *q = rq->q;
 	unsigned long flags = 0UL;
 
-	if (end_that_request_data(rq, error, nr_bytes, bidi_bytes))
-		return 1;
-
-	add_disk_randomness(rq->rq_disk);
-
-	spin_lock_irqsave(q->queue_lock, flags);
-	end_that_request_last(rq, error);
-	spin_unlock_irqrestore(q->queue_lock, flags);
-
-	return 0;
-}
+	if (blk_update_request(rq, error, nr_bytes))
+		return true;
 
-/**
- * blk_end_request - Helper function for drivers to complete the request.
- * @rq:       the request being processed
- * @error:    %0 for success, < %0 for error
- * @nr_bytes: number of bytes to complete
- *
- * Description:
- *     Ends I/O on a number of bytes attached to @rq.
- *     If @rq has leftover, sets it up for the next range of segments.
- *
- * Return:
- *     %0 - we are done with this request
- *     %1 - still buffers pending for this request
- **/
-int blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
-{
-	return blk_end_io(rq, error, nr_bytes, 0);
-}
-EXPORT_SYMBOL_GPL(blk_end_request);
-
-/**
- * __blk_end_request - Helper function for drivers to complete the request.
- * @rq:       the request being processed
- * @error:    %0 for success, < %0 for error
- * @nr_bytes: number of bytes to complete
- *
- * Description:
- *     Must be called with queue lock held unlike blk_end_request().
- *
- * Return:
- *     %0 - we are done with this request
- *     %1 - still buffers pending for this request
- **/
-int __blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
-{
-	if (rq->bio && __end_that_request_first(rq, error, nr_bytes))
-		return 1;
+	/* Bidi request must be completed as a whole */
+	if (unlikely(blk_bidi_rq(rq)) &&
+	    blk_update_request(rq->next_rq, error, bidi_bytes))
+		return true;
 
 	add_disk_randomness(rq->rq_disk);
 
-	end_that_request_last(rq, error);
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(__blk_end_request);
-
-/**
- * blk_end_bidi_request - Helper function for drivers to complete bidi request.
- * @rq:         the bidi request being processed
- * @error:      %0 for success, < %0 for error
- * @nr_bytes:   number of bytes to complete @rq
- * @bidi_bytes: number of bytes to complete @rq->next_rq
- *
- * Description:
- *     Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
- *
- * Return:
- *     %0 - we are done with this request
- *     %1 - still buffers pending for this request
- **/
-int blk_end_bidi_request(struct request *rq, int error, unsigned int nr_bytes,
-			 unsigned int bidi_bytes)
-{
-	return blk_end_io(rq, error, nr_bytes, bidi_bytes);
-}
-EXPORT_SYMBOL_GPL(blk_end_bidi_request);
-
-/**
- * end_request - end I/O on the current segment of the request
- * @req:	the request being processed
- * @uptodate:	error value or %0/%1 uptodate flag
- *
- * Description:
- *     Ends I/O on the current segment of a request. If that is the only
- *     remaining segment, the request is also completed and freed.
- *
- *     This is a remnant of how older block drivers handled I/O completions.
- *     Modern drivers typically end I/O on the full request in one go, unless
- *     they have a residual value to account for. For that case this function
- *     isn't really useful, unless the residual just happens to be the
- *     full current segment. In other words, don't use this function in new
- *     code. Use blk_end_request() or __blk_end_request() to end a request.
- **/
-void end_request(struct request *req, int uptodate)
-{
-	int error = 0;
-
-	if (uptodate <= 0)
-		error = uptodate ? uptodate : -EIO;
-
-	__blk_end_request(req, error, req->hard_cur_sectors << 9);
-}
-EXPORT_SYMBOL(end_request);
+	if (!locked) {
+		spin_lock_irqsave(q->queue_lock, flags);
+		finish_request(rq, error);
+		spin_unlock_irqrestore(q->queue_lock, flags);
+	} else
+		finish_request(rq, error);
 
-/**
- * blk_update_request - Special helper function for request stacking drivers
- * @rq:           the request being processed
- * @error:        %0 for success, < %0 for error
- * @nr_bytes:     number of bytes to complete @rq
- *
- * Description:
- *     Ends I/O on a number of bytes attached to @rq, but doesn't complete
- *     the request structure even if @rq doesn't have leftover.
- *     If @rq has leftover, sets it up for the next range of segments.
- *
- *     This special helper function is only for request stacking drivers
- *     (e.g. request-based dm) so that they can handle partial completion.
- *     Actual device drivers should use blk_end_request instead.
- */
-void blk_update_request(struct request *rq, int error, unsigned int nr_bytes)
-{
-	if (!end_that_request_data(rq, error, nr_bytes, 0)) {
-		/*
-		 * These members are not updated in end_that_request_data()
-		 * when all bios are completed.
-		 * Update them so that the request stacking driver can find
-		 * how many bytes remain in the request later.
-		 */
-		rq->nr_sectors = rq->hard_nr_sectors = 0;
-		rq->current_nr_sectors = rq->hard_cur_sectors = 0;
-	}
+	return false;
 }
-EXPORT_SYMBOL_GPL(blk_update_request);
+EXPORT_SYMBOL_GPL(__blk_end_io);
 
 void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
 		     struct bio *bio)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index e8175c8..cb2f9ae 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -818,27 +818,117 @@ extern unsigned int blk_rq_bytes(struct request *rq);
 extern unsigned int blk_rq_cur_bytes(struct request *rq);
 
 /*
- * blk_end_request() and friends.
- * __blk_end_request() and end_request() must be called with
- * the request queue spinlock acquired.
+ * Request completion related functions.
+ *
+ * blk_update_request() completes given number of bytes and updates
+ * the request without completing it.
+ *
+ * blk_end_request() and friends.  __blk_end_request() and
+ * end_request() must be called with the request queue spinlock
+ * acquired.
  *
  * Several drivers define their own end_request and call
  * blk_end_request() for parts of the original function.
  * This prevents code duplication in drivers.
  */
-extern int blk_end_request(struct request *rq, int error,
-				unsigned int nr_bytes);
-extern int __blk_end_request(struct request *rq, int error,
-				unsigned int nr_bytes);
-extern int blk_end_bidi_request(struct request *rq, int error,
-				unsigned int nr_bytes, unsigned int bidi_bytes);
-extern void end_request(struct request *, int);
+extern bool blk_update_request(struct request *rq, int error,
+			       unsigned int nr_bytes);
+
+/* internal function, subject to change, don't ever use directly */
+extern bool __blk_end_io(struct request *rq, int error,
+			 unsigned int nr_bytes, unsigned int bidi_bytes,
+			 bool locked);
+
+/**
+ * blk_end_request - Helper function for drivers to complete the request.
+ * @rq:       the request being processed
+ * @error:    %0 for success, < %0 for error
+ * @nr_bytes: number of bytes to complete
+ *
+ * Description:
+ *     Ends I/O on a number of bytes attached to @rq.
+ *     If @rq has leftover, sets it up for the next range of segments.
+ *
+ * Return:
+ *     %false - we are done with this request
+ *     %true  - still buffers pending for this request
+ **/
+static inline bool blk_end_request(struct request *rq, int error,
+				   unsigned int nr_bytes)
+{
+	return __blk_end_io(rq, error, nr_bytes, 0, false);
+}
+
+/**
+ * __blk_end_request - Helper function for drivers to complete the request.
+ * @rq:       the request being processed
+ * @error:    %0 for success, < %0 for error
+ * @nr_bytes: number of bytes to complete
+ *
+ * Description:
+ *     Must be called with queue lock held unlike blk_end_request().
+ *
+ * Return:
+ *     %false - we are done with this request
+ *     %true  - still buffers pending for this request
+ **/
+static inline bool __blk_end_request(struct request *rq, int error,
+				     unsigned int nr_bytes)
+{
+	return __blk_end_io(rq, error, nr_bytes, 0, true);
+}
+
+/**
+ * blk_end_bidi_request - Helper function for drivers to complete bidi request.
+ * @rq:         the bidi request being processed
+ * @error:      %0 for success, < %0 for error
+ * @nr_bytes:   number of bytes to complete @rq
+ * @bidi_bytes: number of bytes to complete @rq->next_rq
+ *
+ * Description:
+ *     Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
+ *
+ * Return:
+ *     %false - we are done with this request
+ *     %true  - still buffers pending for this request
+ **/
+static inline bool blk_end_bidi_request(struct request *rq, int error,
+					unsigned int nr_bytes,
+					unsigned int bidi_bytes)
+{
+	return __blk_end_io(rq, error, nr_bytes, bidi_bytes, false);
+}
+
+/**
+ * end_request - end I/O on the current segment of the request
+ * @rq:		the request being processed
+ * @uptodate:	error value or %0/%1 uptodate flag
+ *
+ * Description:
+ *     Ends I/O on the current segment of a request. If that is the only
+ *     remaining segment, the request is also completed and freed.
+ *
+ *     This is a remnant of how older block drivers handled I/O completions.
+ *     Modern drivers typically end I/O on the full request in one go, unless
+ *     they have a residual value to account for. For that case this function
+ *     isn't really useful, unless the residual just happens to be the
+ *     full current segment. In other words, don't use this function in new
+ *     code. Use blk_end_request() or __blk_end_request() to end a request.
+ **/
+static inline void end_request(struct request *rq, int uptodate)
+{
+	int error = 0;
+
+	if (uptodate <= 0)
+		error = uptodate ? uptodate : -EIO;
+
+	__blk_end_io(rq, error, rq->hard_cur_sectors << 9, 0, true);
+}
+
 extern void blk_complete_request(struct request *);
 extern void __blk_complete_request(struct request *);
 extern void blk_abort_request(struct request *);
 extern void blk_abort_queue(struct request_queue *);
-extern void blk_update_request(struct request *rq, int error,
-			       unsigned int nr_bytes);
 
 /*
  * Access functions for manipulating queue properties
-- 
1.6.0.2


* [PATCH 13/17] block: move rq->start_time initialization to blk_rq_init()
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo

Impact: rq->start_time is valid for all requests

rq->start_time was initialized in init_request_from_bio() so special
requests didn't have start_time set.  This has been okay as
start_time has been used only for fs requests; however, nothing
indicates whether this is actually the case.  Set rq->start_time in
blk_rq_init() and guarantee that all initialized rq's have their
start_time set.  This improves consistency at virtually no cost, and
future changes will make use of the timestamp for !bio requests.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 block/blk-core.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index b1781dd..7d0ab48 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -134,6 +134,7 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
 	rq->cmd = rq->__cmd;
 	rq->tag = -1;
 	rq->ref_count = 1;
+	rq->start_time = jiffies;
 }
 EXPORT_SYMBOL(blk_rq_init);
 
@@ -1094,7 +1095,6 @@ void init_request_from_bio(struct request *req, struct bio *bio)
 	req->errors = 0;
 	req->hard_sector = req->sector = bio->bi_sector;
 	req->ioprio = bio_prio(bio);
-	req->start_time = jiffies;
 	blk_rq_bio_prep(req->q, req, bio);
 }
 
-- 
1.6.0.2



* [PATCH 14/17] block: implement and use [__]blk_end_request_all()
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (12 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 13/17] block: move rq->start_time initialization to blk_rq_init() Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 15/17] block: kill end_request() Tejun Heo
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier
  Cc: Tejun Heo, Russell King, Stephen Rothwell, Mike Miller,
	Martin Schwidefsky, Jeff Garzik, Rusty Russell,
	Jeremy Fitzhardinge, Alex Dubov, James Bottomley

Impact: cleanup

There are many [__]blk_end_request() call sites which pass the full
request length and expect full completion.  Many of them verify that
the request actually completed by calling BUG_ON() on the return
value, which is awkward and error-prone.

This patch adds [__]blk_end_request_all() which takes @rq and @error
and fully completes the request.  BUG_ON() is added to
blk_update_request() to ensure that this actually happens.
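The conversion can be sketched in plain userspace C (a minimal model only: this `struct request` and these helpers are hypothetical stand-ins tracking just an outstanding byte count, not the real block-layer types):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the kernel's struct request: it tracks
 * only how many bytes are still outstanding. */
struct request {
	unsigned int remaining;	/* bytes not yet completed */
	int error;		/* 0 on success, -errno on failure */
};

static unsigned int blk_rq_bytes(struct request *rq)
{
	return rq->remaining;
}

/* Models blk_end_request(): completes @nr_bytes of the request and
 * returns true while the request still has bytes pending. */
static bool blk_end_request(struct request *rq, int error,
			    unsigned int nr_bytes)
{
	rq->error = error;
	rq->remaining -= nr_bytes < rq->remaining ? nr_bytes : rq->remaining;
	return rq->remaining != 0;
}

/* The new helper: complete everything and trap (the kernel uses
 * BUG_ON(); assert() stands in for it here) if anything is left. */
static void blk_end_request_all(struct request *rq, int error)
{
	bool pending = blk_end_request(rq, error, blk_rq_bytes(rq));
	assert(!pending);	/* kernel: BUG_ON(pending) */
}
```

With such a helper, a call site's `if (blk_end_request(rq, 0, blk_rq_bytes(rq))) BUG();` collapses to a single `blk_end_request_all(rq, 0);`.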

Most conversions are simple but there are a few noteworthy ones.

* cdrom/viocd: viocd_end_request() replaced with direct calls to
  __blk_end_request_all().

* s390/block/dasd: dasd_end_request() replaced with direct calls to
  __blk_end_request_all().

* s390/char/tape_block: tapeblock_end_request() replaced with direct
  calls to blk_end_request_all().

IDE needs non-trivial changes and will be updated later.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Mike Miller <mike.miller@hp.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
---
 arch/arm/plat-omap/mailbox.c        |   11 +++--------
 block/blk-barrier.c                 |    9 ++-------
 block/blk-core.c                    |    2 +-
 block/elevator.c                    |    2 +-
 drivers/block/cciss.c               |    3 +--
 drivers/block/cpqarray.c            |    3 +--
 drivers/block/sx8.c                 |    3 +--
 drivers/block/virtio_blk.c          |    2 +-
 drivers/block/xen-blkfront.c        |    4 +---
 drivers/cdrom/gdrom.c               |    2 +-
 drivers/cdrom/viocd.c               |   25 ++++---------------------
 drivers/memstick/core/mspro_block.c |    2 +-
 drivers/s390/block/dasd.c           |   17 ++++-------------
 drivers/s390/char/tape_block.c      |   15 ++++-----------
 drivers/scsi/scsi_lib.c             |    2 +-
 include/linux/blkdev.h              |   32 ++++++++++++++++++++++++++++++++
 16 files changed, 59 insertions(+), 75 deletions(-)

diff --git a/arch/arm/plat-omap/mailbox.c b/arch/arm/plat-omap/mailbox.c
index b52ce05..bb83e84 100644
--- a/arch/arm/plat-omap/mailbox.c
+++ b/arch/arm/plat-omap/mailbox.c
@@ -116,8 +116,7 @@ static void mbox_tx_work(struct work_struct *work)
 		}
 
 		spin_lock(q->queue_lock);
-		if (__blk_end_request(rq, 0, 0))
-			BUG();
+		__blk_end_request_all(rq, 0);
 		spin_unlock(q->queue_lock);
 	}
 }
@@ -148,10 +147,7 @@ static void mbox_rx_work(struct work_struct *work)
 			break;
 
 		msg = (mbox_msg_t) rq->data;
-
-		if (blk_end_request(rq, 0, 0))
-			BUG();
-
+		blk_end_request_all(rq, 0);
 		mbox->rxq->callback((void *)msg);
 	}
 }
@@ -261,8 +257,7 @@ omap_mbox_read(struct device *dev, struct device_attribute *attr, char *buf)
 
 		*p = (mbox_msg_t) rq->data;
 
-		if (blk_end_request(rq, 0, 0))
-			BUG();
+		blk_end_request_all(rq, 0);
 
 		if (unlikely(mbox_seq_test(mbox, *p))) {
 			pr_info("mbox: Illegal seq bit!(%08x) ignored\n", *p);
diff --git a/block/blk-barrier.c b/block/blk-barrier.c
index f7dae57..bac1de1 100644
--- a/block/blk-barrier.c
+++ b/block/blk-barrier.c
@@ -106,10 +106,7 @@ bool blk_ordered_complete_seq(struct request_queue *q, unsigned seq, int error)
 	 */
 	q->ordseq = 0;
 	rq = q->orig_bar_rq;
-
-	if (__blk_end_request(rq, q->orderr, blk_rq_bytes(rq)))
-		BUG();
-
+	__blk_end_request_all(rq, q->orderr);
 	return true;
 }
 
@@ -252,9 +249,7 @@ bool blk_do_ordered(struct request_queue *q, struct request **rqp)
 			 * with prejudice.
 			 */
 			elv_dequeue_request(q, rq);
-			if (__blk_end_request(rq, -EOPNOTSUPP,
-					      blk_rq_bytes(rq)))
-				BUG();
+			__blk_end_request_all(rq, -EOPNOTSUPP);
 			*rqp = NULL;
 			return false;
 		}
diff --git a/block/blk-core.c b/block/blk-core.c
index 7d0ab48..f9118c0 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1770,7 +1770,7 @@ struct request *elv_next_request(struct request_queue *q)
 			break;
 		} else if (ret == BLKPREP_KILL) {
 			rq->cmd_flags |= REQ_QUIET;
-			__blk_end_request(rq, -EIO, blk_rq_bytes(rq));
+			__blk_end_request_all(rq, -EIO);
 		} else {
 			printk(KERN_ERR "%s: bad return=%d\n", __func__, ret);
 			break;
diff --git a/block/elevator.c b/block/elevator.c
index fd17605..54d01b8 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -785,7 +785,7 @@ void elv_abort_queue(struct request_queue *q)
 		rq = list_entry_rq(q->queue_head.next);
 		rq->cmd_flags |= REQ_QUIET;
 		trace_block_rq_abort(q, rq);
-		__blk_end_request(rq, -EIO, blk_rq_bytes(rq));
+		__blk_end_request_all(rq, -EIO);
 	}
 }
 EXPORT_SYMBOL(elv_abort_queue);
diff --git a/drivers/block/cciss.c b/drivers/block/cciss.c
index 5d0e135..b78339f 100644
--- a/drivers/block/cciss.c
+++ b/drivers/block/cciss.c
@@ -1308,8 +1308,7 @@ static void cciss_softirq_done(struct request *rq)
 	printk("Done with %p\n", rq);
 #endif				/* CCISS_DEBUG */
 
-	if (blk_end_request(rq, (rq->errors == 0) ? 0 : -EIO, blk_rq_bytes(rq)))
-		BUG();
+	blk_end_request_all(rq, (rq->errors == 0) ? 0 : -EIO);
 
 	spin_lock_irqsave(&h->lock, flags);
 	cmd_free(h, cmd, 1);
diff --git a/drivers/block/cpqarray.c b/drivers/block/cpqarray.c
index 5d39df1..473af67 100644
--- a/drivers/block/cpqarray.c
+++ b/drivers/block/cpqarray.c
@@ -1023,8 +1023,7 @@ static inline void complete_command(cmdlist_t *cmd, int timeout)
 				cmd->req.sg[i].size, ddir);
 
 	DBGPX(printk("Done with %p\n", rq););
-	if (__blk_end_request(rq, error, blk_rq_bytes(rq)))
-		BUG();
+	__blk_end_request_all(rq, error);
 }
 
 /*
diff --git a/drivers/block/sx8.c b/drivers/block/sx8.c
index a18e1ca..3ba4437 100644
--- a/drivers/block/sx8.c
+++ b/drivers/block/sx8.c
@@ -749,8 +749,7 @@ static inline void carm_end_request_queued(struct carm_host *host,
 	struct request *req = crq->rq;
 	int rc;
 
-	rc = __blk_end_request(req, error, blk_rq_bytes(req));
-	assert(rc == 0);
+	__blk_end_request_all(req, error);
 
 	rc = carm_put_request(host, crq);
 	assert(rc == 0);
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 5d34764..50745e6 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -62,7 +62,7 @@ static void blk_done(struct virtqueue *vq)
 			break;
 		}
 
-		__blk_end_request(vbr->req, error, blk_rq_bytes(vbr->req));
+		__blk_end_request_all(vbr->req, error);
 		list_del(&vbr->list);
 		mempool_free(vbr, vblk->pool);
 	}
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 8f90508..cd6cfe3 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -551,7 +551,6 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 
 	for (i = info->ring.rsp_cons; i != rp; i++) {
 		unsigned long id;
-		int ret;
 
 		bret = RING_GET_RESPONSE(&info->ring, i);
 		id   = bret->id;
@@ -578,8 +577,7 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 				dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
 					"request: %x\n", bret->status);
 
-			ret = __blk_end_request(req, error, blk_rq_bytes(req));
-			BUG_ON(ret);
+			__blk_end_request_all(req, error);
 			break;
 		default:
 			BUG();
diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
index 2eecb77..fee9a9e 100644
--- a/drivers/cdrom/gdrom.c
+++ b/drivers/cdrom/gdrom.c
@@ -632,7 +632,7 @@ static void gdrom_readdisk_dma(struct work_struct *work)
 		* before handling ending the request */
 		spin_lock(&gdrom_lock);
 		list_del_init(&req->queuelist);
-		__blk_end_request(req, err, blk_rq_bytes(req));
+		__blk_end_request_all(req, err);
 	}
 	spin_unlock(&gdrom_lock);
 	kfree(read_command);
diff --git a/drivers/cdrom/viocd.c b/drivers/cdrom/viocd.c
index 1392935..cc3efa0 100644
--- a/drivers/cdrom/viocd.c
+++ b/drivers/cdrom/viocd.c
@@ -291,23 +291,6 @@ static int send_request(struct request *req)
 	return 0;
 }
 
-static void viocd_end_request(struct request *req, int error)
-{
-	int nsectors = req->hard_nr_sectors;
-
-	/*
-	 * Make sure it's fully ended, and ensure that we process
-	 * at least one sector.
-	 */
-	if (blk_pc_request(req))
-		nsectors = (req->data_len + 511) >> 9;
-	if (!nsectors)
-		nsectors = 1;
-
-	if (__blk_end_request(req, error, nsectors << 9))
-		BUG();
-}
-
 static int rwreq;
 
 static void do_viocd_request(struct request_queue *q)
@@ -316,11 +299,11 @@ static void do_viocd_request(struct request_queue *q)
 
 	while ((rwreq == 0) && ((req = elv_next_request(q)) != NULL)) {
 		if (!blk_fs_request(req))
-			viocd_end_request(req, -EIO);
+			__blk_end_request_all(req, -EIO);
 		else if (send_request(req) < 0) {
 			printk(VIOCD_KERN_WARNING
 					"unable to send message to OS/400!");
-			viocd_end_request(req, -EIO);
+			__blk_end_request_all(req, -EIO);
 		} else
 			rwreq++;
 	}
@@ -531,9 +514,9 @@ return_complete:
 					"with rc %d:0x%04X: %s\n",
 					req, event->xRc,
 					bevent->sub_result, err->msg);
-			viocd_end_request(req, -EIO);
+			__blk_end_request_all(req, -EIO);
 		} else
-			viocd_end_request(req, 0);
+			__blk_end_request_all(req, 0);
 
 		/* restart handling of incoming requests */
 		spin_unlock_irqrestore(&viocd_reqlock, flags);
diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
index de143de..a416346 100644
--- a/drivers/memstick/core/mspro_block.c
+++ b/drivers/memstick/core/mspro_block.c
@@ -826,7 +826,7 @@ static void mspro_block_submit_req(struct request_queue *q)
 
 	if (msb->eject) {
 		while ((req = elv_next_request(q)) != NULL)
-			__blk_end_request(req, -ENODEV, blk_rq_bytes(req));
+			__blk_end_request_all(req, -ENODEV);
 
 		return;
 	}
diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index 08c23a9..bc172ee 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -1597,15 +1597,6 @@ void dasd_block_clear_timer(struct dasd_block *block)
 }
 
 /*
- * posts the buffer_cache about a finalized request
- */
-static inline void dasd_end_request(struct request *req, int error)
-{
-	if (__blk_end_request(req, error, blk_rq_bytes(req)))
-		BUG();
-}
-
-/*
  * Process finished error recovery ccw.
  */
 static inline void __dasd_block_process_erp(struct dasd_block *block,
@@ -1659,7 +1650,7 @@ static void __dasd_process_request_queue(struct dasd_block *block)
 				      "Rejecting write request %p",
 				      req);
 			blkdev_dequeue_request(req);
-			dasd_end_request(req, -EIO);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 		cqr = basedev->discipline->build_cp(basedev, block, req);
@@ -1688,7 +1679,7 @@ static void __dasd_process_request_queue(struct dasd_block *block)
 				      "on request %p",
 				      PTR_ERR(cqr), req);
 			blkdev_dequeue_request(req);
-			dasd_end_request(req, -EIO);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 		/*
@@ -1714,7 +1705,7 @@ static void __dasd_cleanup_cqr(struct dasd_ccw_req *cqr)
 	status = cqr->block->base->discipline->free_cp(cqr, req);
 	if (status <= 0)
 		error = status ? status : -EIO;
-	dasd_end_request(req, error);
+	__blk_end_request_all(req, error);
 }
 
 /*
@@ -2020,7 +2011,7 @@ static void dasd_flush_request_queue(struct dasd_block *block)
 	spin_lock_irq(&block->request_queue_lock);
 	while ((req = elv_next_request(block->request_queue))) {
 		blkdev_dequeue_request(req);
-		dasd_end_request(req, -EIO);
+		__blk_end_request_all(req, -EIO);
 	}
 	spin_unlock_irq(&block->request_queue_lock);
 }
diff --git a/drivers/s390/char/tape_block.c b/drivers/s390/char/tape_block.c
index ae18baf..2736291 100644
--- a/drivers/s390/char/tape_block.c
+++ b/drivers/s390/char/tape_block.c
@@ -74,13 +74,6 @@ tapeblock_trigger_requeue(struct tape_device *device)
  * Post finished request.
  */
 static void
-tapeblock_end_request(struct request *req, int error)
-{
-	if (blk_end_request(req, error, blk_rq_bytes(req)))
-		BUG();
-}
-
-static void
 __tapeblock_end_request(struct tape_request *ccw_req, void *data)
 {
 	struct tape_device *device;
@@ -90,7 +83,7 @@ __tapeblock_end_request(struct tape_request *ccw_req, void *data)
 
 	device = ccw_req->device;
 	req = (struct request *) data;
-	tapeblock_end_request(req, (ccw_req->rc == 0) ? 0 : -EIO);
+	blk_end_request_all(req, (ccw_req->rc == 0) ? 0 : -EIO);
 	if (ccw_req->rc == 0)
 		/* Update position. */
 		device->blk_data.block_position =
@@ -118,7 +111,7 @@ tapeblock_start_request(struct tape_device *device, struct request *req)
 	ccw_req = device->discipline->bread(device, req);
 	if (IS_ERR(ccw_req)) {
 		DBF_EVENT(1, "TBLOCK: bread failed\n");
-		tapeblock_end_request(req, -EIO);
+		blk_end_request_all(req, -EIO);
 		return PTR_ERR(ccw_req);
 	}
 	ccw_req->callback = __tapeblock_end_request;
@@ -131,7 +124,7 @@ tapeblock_start_request(struct tape_device *device, struct request *req)
 		 * Start/enqueueing failed. No retries in
 		 * this case.
 		 */
-		tapeblock_end_request(req, -EIO);
+		blk_end_request_all(req, -EIO);
 		device->discipline->free_bread(ccw_req);
 	}
 
@@ -177,7 +170,7 @@ tapeblock_requeue(struct work_struct *work) {
 			DBF_EVENT(1, "TBLOCK: Rejecting write request\n");
 			blkdev_dequeue_request(req);
 			spin_unlock_irq(&device->blk_data.request_queue_lock);
-			tapeblock_end_request(req, -EIO);
+			blk_end_request_all(req, -EIO);
 			spin_lock_irq(&device->blk_data.request_queue_lock);
 			continue;
 		}
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index b82ffd9..a4e84c6 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1097,7 +1097,7 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
 			if (driver_byte(result) & DRIVER_SENSE)
 				scsi_print_sense("", cmd);
 		}
-		blk_end_request(req, -EIO, blk_rq_bytes(req));
+		blk_end_request_all(req, -EIO);
 		scsi_next_command(cmd);
 		break;
 	case ACTION_REPREP:
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cb2f9ae..6ba7dbf 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -860,6 +860,22 @@ static inline bool blk_end_request(struct request *rq, int error,
 }
 
 /**
+ * blk_end_request_all - Helper function for drivers to finish the request.
+ * @rq: the request to finish
+ * @error: %0 for success, < %0 for error
+ *
+ * Description:
+ *     Completely finish @rq.
+ */
+static inline void blk_end_request_all(struct request *rq, int error)
+{
+	bool pending;
+
+	pending = blk_end_request(rq, error, blk_rq_bytes(rq));
+	BUG_ON(pending);
+}
+
+/**
  * __blk_end_request - Helper function for drivers to complete the request.
  * @rq:       the request being processed
  * @error:    %0 for success, < %0 for error
@@ -879,6 +895,22 @@ static inline bool __blk_end_request(struct request *rq, int error,
 }
 
 /**
+ * __blk_end_request_all - Helper function for drivers to finish the request.
+ * @rq: the request to finish
+ * @error: %0 for success, < %0 for error
+ *
+ * Description:
+ *     Completely finish @rq.  Must be called with queue lock held.
+ */
+static inline void __blk_end_request_all(struct request *rq, int error)
+{
+	bool pending;
+
+	pending = __blk_end_request(rq, error, blk_rq_bytes(rq));
+	BUG_ON(pending);
+}
+
+/**
  * blk_end_bidi_request - Helper function for drivers to complete bidi request.
  * @rq:         the bidi request being processed
  * @error:      %0 for success, < %0 for error
-- 
1.6.0.2



* [PATCH 15/17] block: kill end_request()
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (13 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 14/17] block: implement and use [__]blk_end_request_all() Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  3:23   ` Grant Likely
  2009-03-21  2:58   ` Tejun Heo
  2009-03-16  2:28 ` [PATCH 16/17] ubd: simplify block request completion Tejun Heo
                   ` (2 subsequent siblings)
  17 siblings, 2 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier
  Cc: Tejun Heo, Jörg Dorchain, Geert Uytterhoeven, Tim Waugh,
	Stephen Rothwell, Paul Mackerras, Jeremy Fitzhardinge,
	Grant Likely, Markus Lidel, David Woodhouse, Pete Zaitcev

Impact: kill obsolete interface function

end_request() has been kept around for backward compatibility;
however, it seems to be about time for it to go away.

* There aren't too many users left.

* Its use of @uptodate is pretty confusing.

* In some cases, newer code ends up using mixture of end_request() and
  [__]blk_end_request[_all](), which is way too confusing.

So, kill it.
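The @uptodate convention being removed can be summarized with a small userspace sketch of the translation the old end_request() shim performed (an illustration of the 1/0/-errno mapping, not kernel code):

```c
#include <assert.h>
#include <errno.h>

/* Mirrors how end_request()'s @uptodate argument maps onto the
 * 0/-errno convention used by [__]blk_end_request[_all]():
 *   uptodate >  0  -> 0 (success)
 *   uptodate == 0  -> -EIO (generic failure)
 *   uptodate <  0  -> passed through as -errno
 */
static int uptodate_to_error(int uptodate)
{
	if (uptodate > 0)
		return 0;
	return uptodate ? uptodate : -EIO;
}
```

This is why the conversions below turn `end_request(rq, 1)` into `__blk_end_request_all(rq, 0)` and `end_request(rq, 0)` into `__blk_end_request_all(rq, -EIO)`.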

Most conversions are straightforward.  Noteworthy ones are...

* paride/pcd: next_request() updated to take 0/-errno instead of 1/0.

* paride/pf: pf_end_request() and next_request() updated to take
  0/-errno instead of 1/0.

* xd: xd_readwrite() updated to return 0/-errno instead of 1/0.

* mtd/mtd_blkdevs: blktrans_discard_request() updated to return
  0/-errno instead of 1/0.  Unnecessary local variable res
  initialization removed from mtd_blktrans_thread().

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jörg Dorchain <joerg@dorchain.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Tim Waugh <tim@cyberelk.net>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Pete Zaitcev <zaitcev@redhat.com>
---
 drivers/block/amiflop.c         |   10 +++++-----
 drivers/block/ataflop.c         |   14 +++++++-------
 drivers/block/hd.c              |   14 +++++++-------
 drivers/block/paride/pcd.c      |   12 ++++++------
 drivers/block/paride/pd.c       |    5 +++--
 drivers/block/paride/pf.c       |   28 ++++++++++++++--------------
 drivers/block/ps3disk.c         |    6 +++---
 drivers/block/swim3.c           |   26 +++++++++++++-------------
 drivers/block/xd.c              |   15 ++++++++-------
 drivers/block/xen-blkfront.c    |    2 +-
 drivers/block/xsysace.c         |    4 ++--
 drivers/block/z2ram.c           |    4 ++--
 drivers/cdrom/gdrom.c           |    6 +++---
 drivers/message/i2o/i2o_block.c |    2 +-
 drivers/mtd/mtd_blkdevs.c       |   22 +++++++++++-----------
 drivers/sbus/char/jsflash.c     |    8 ++++----
 include/linux/blkdev.h          |   31 ++-----------------------------
 17 files changed, 92 insertions(+), 117 deletions(-)

diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c
index 8df436f..163750e 100644
--- a/drivers/block/amiflop.c
+++ b/drivers/block/amiflop.c
@@ -1359,7 +1359,7 @@ static void redo_fd_request(void)
 #endif
 		block = CURRENT->sector + cnt;
 		if ((int)block > floppy->blocks) {
-			end_request(CURRENT, 0);
+			__blk_end_request_all(CURRENT, -EIO);
 			goto repeat;
 		}
 
@@ -1373,11 +1373,11 @@ static void redo_fd_request(void)
 
 		if ((rq_data_dir(CURRENT) != READ) && (rq_data_dir(CURRENT) != WRITE)) {
 			printk(KERN_WARNING "do_fd_request: unknown command\n");
-			end_request(CURRENT, 0);
+			__blk_end_request_all(CURRENT, -EIO);
 			goto repeat;
 		}
 		if (get_track(drive, track) == -1) {
-			end_request(CURRENT, 0);
+			__blk_end_request_all(CURRENT, -EIO);
 			goto repeat;
 		}
 
@@ -1391,7 +1391,7 @@ static void redo_fd_request(void)
 
 			/* keep the drive spinning while writes are scheduled */
 			if (!fd_motor_on(drive)) {
-				end_request(CURRENT, 0);
+				__blk_end_request_all(CURRENT, -EIO);
 				goto repeat;
 			}
 			/*
@@ -1410,7 +1410,7 @@ static void redo_fd_request(void)
 	CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
 	CURRENT->sector += CURRENT->current_nr_sectors;
 
-	end_request(CURRENT, 1);
+	__blk_end_request_all(CURRENT, 0);
 	goto repeat;
 }
 
diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
index 4234c11..c9844f0 100644
--- a/drivers/block/ataflop.c
+++ b/drivers/block/ataflop.c
@@ -612,7 +612,7 @@ static void fd_error( void )
 	CURRENT->errors++;
 	if (CURRENT->errors >= MAX_ERRORS) {
 		printk(KERN_ERR "fd%d: too many errors.\n", SelectedDrive );
-		end_request(CURRENT, 0);
+		__blk_end_request_all(CURRENT, -EIO);
 	}
 	else if (CURRENT->errors == RECALIBRATE_ERRORS) {
 		printk(KERN_WARNING "fd%d: recalibrating\n", SelectedDrive );
@@ -734,7 +734,7 @@ static void do_fd_action( int drive )
 			/* all sectors finished */
 			CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
 			CURRENT->sector += CURRENT->current_nr_sectors;
-			end_request(CURRENT, 1);
+			__blk_end_request_all(CURRENT, 0);
 			redo_fd_request();
 			return;
 		    }
@@ -1141,7 +1141,7 @@ static void fd_rwsec_done1(int status)
 		/* all sectors finished */
 		CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
 		CURRENT->sector += CURRENT->current_nr_sectors;
-		end_request(CURRENT, 1);
+		__blk_end_request_all(CURRENT, 0);
 		redo_fd_request();
 	}
 	return;
@@ -1414,7 +1414,7 @@ repeat:
 	if (!UD.connected) {
 		/* drive not connected */
 		printk(KERN_ERR "Unknown Device: fd%d\n", drive );
-		end_request(CURRENT, 0);
+		__blk_end_request_all(CURRENT, -EIO);
 		goto repeat;
 	}
 		
@@ -1430,12 +1430,12 @@ repeat:
 		/* user supplied disk type */
 		if (--type >= NUM_DISK_MINORS) {
 			printk(KERN_WARNING "fd%d: invalid disk format", drive );
-			end_request(CURRENT, 0);
+			__blk_end_request_all(CURRENT, -EIO);
 			goto repeat;
 		}
 		if (minor2disktype[type].drive_types > DriveType)  {
 			printk(KERN_WARNING "fd%d: unsupported disk format", drive );
-			end_request(CURRENT, 0);
+			__blk_end_request_all(CURRENT, -EIO);
 			goto repeat;
 		}
 		type = minor2disktype[type].index;
@@ -1445,7 +1445,7 @@ repeat:
 	}
 	
 	if (CURRENT->sector + 1 > UDT->blocks) {
-		end_request(CURRENT, 0);
+		__blk_end_request_all(CURRENT, -EIO);
 		goto repeat;
 	}
 
diff --git a/drivers/block/hd.c b/drivers/block/hd.c
index 482c0c4..3fc066f 100644
--- a/drivers/block/hd.c
+++ b/drivers/block/hd.c
@@ -408,7 +408,7 @@ static void bad_rw_intr(void)
 	if (req != NULL) {
 		struct hd_i_struct *disk = req->rq_disk->private_data;
 		if (++req->errors >= MAX_ERRORS || (hd_error & BBD_ERR)) {
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			disk->special_op = disk->recalibrate = 1;
 		} else if (req->errors % RESET_FREQ == 0)
 			reset = 1;
@@ -464,7 +464,7 @@ ok_to_read:
 		req->buffer+512);
 #endif
 	if (req->current_nr_sectors <= 0)
-		end_request(req, 1);
+		__blk_end_request_all(req, 0);
 	if (i > 0) {
 		SET_HANDLER(&read_intr);
 		return;
@@ -503,7 +503,7 @@ ok_to_write:
 	--req->current_nr_sectors;
 	req->buffer += 512;
 	if (!i || (req->bio && req->current_nr_sectors <= 0))
-		end_request(req, 1);
+		__blk_end_request_all(req, 0);
 	if (i > 0) {
 		SET_HANDLER(&write_intr);
 		outsw(HD_DATA, req->buffer, 256);
@@ -548,7 +548,7 @@ static void hd_times_out(unsigned long dummy)
 #ifdef DEBUG
 		printk("%s: too many errors\n", name);
 #endif
-		end_request(CURRENT, 0);
+		__blk_end_request_all(CURRENT, -EIO);
 	}
 	local_irq_disable();
 	hd_request();
@@ -564,7 +564,7 @@ static int do_special_op(struct hd_i_struct *disk, struct request *req)
 	}
 	if (disk->head > 16) {
 		printk("%s: cannot handle device with more than 16 heads - giving up\n", req->rq_disk->disk_name);
-		end_request(req, 0);
+		__blk_end_request_all(req, -EIO);
 	}
 	disk->special_op = 0;
 	return 1;
@@ -610,7 +610,7 @@ repeat:
 	    ((block+nsect) > get_capacity(req->rq_disk))) {
 		printk("%s: bad access: block=%d, count=%d\n",
 			req->rq_disk->disk_name, block, nsect);
-		end_request(req, 0);
+		__blk_end_request_all(req, -EIO);
 		goto repeat;
 	}
 
@@ -650,7 +650,7 @@ repeat:
 			break;
 		default:
 			printk("unknown hd-command\n");
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			break;
 		}
 	}
diff --git a/drivers/block/paride/pcd.c b/drivers/block/paride/pcd.c
index e91d4b4..0ee886c 100644
--- a/drivers/block/paride/pcd.c
+++ b/drivers/block/paride/pcd.c
@@ -735,16 +735,16 @@ static void do_pcd_request(struct request_queue * q)
 			ps_set_intr(do_pcd_read, NULL, 0, nice);
 			return;
 		} else
-			end_request(pcd_req, 0);
+			__blk_end_request_all(pcd_req, -EIO);
 	}
 }
 
-static inline void next_request(int success)
+static inline void next_request(int err)
 {
 	unsigned long saved_flags;
 
 	spin_lock_irqsave(&pcd_lock, saved_flags);
-	end_request(pcd_req, success);
+	__blk_end_request_all(pcd_req, err);
 	pcd_busy = 0;
 	do_pcd_request(pcd_queue);
 	spin_unlock_irqrestore(&pcd_lock, saved_flags);
@@ -781,7 +781,7 @@ static void pcd_start(void)
 
 	if (pcd_command(pcd_current, rd_cmd, 2048, "read block")) {
 		pcd_bufblk = -1;
-		next_request(0);
+		next_request(-EIO);
 		return;
 	}
 
@@ -796,7 +796,7 @@ static void do_pcd_read(void)
 	pcd_retries = 0;
 	pcd_transfer();
 	if (!pcd_count) {
-		next_request(1);
+		next_request(0);
 		return;
 	}
 
@@ -815,7 +815,7 @@ static void do_pcd_read_drq(void)
 			return;
 		}
 		pcd_bufblk = -1;
-		next_request(0);
+		next_request(-EIO);
 		return;
 	}
 
diff --git a/drivers/block/paride/pd.c b/drivers/block/paride/pd.c
index 9299455..1b8f001 100644
--- a/drivers/block/paride/pd.c
+++ b/drivers/block/paride/pd.c
@@ -410,7 +410,8 @@ static void run_fsm(void)
 				pd_claimed = 0;
 				phase = NULL;
 				spin_lock_irqsave(&pd_lock, saved_flags);
-				end_request(pd_req, res);
+				__blk_end_request_all(pd_req,
+						      res == Ok ? 0 : -EIO);
 				pd_req = elv_next_request(pd_queue);
 				if (!pd_req)
 					stop = 1;
@@ -477,7 +478,7 @@ static int pd_next_buf(void)
 	if (pd_count)
 		return 0;
 	spin_lock_irqsave(&pd_lock, saved_flags);
-	end_request(pd_req, 1);
+	__blk_end_request_all(pd_req, 0);
 	pd_count = pd_req->current_nr_sectors;
 	pd_buf = pd_req->buffer;
 	spin_unlock_irqrestore(&pd_lock, saved_flags);
diff --git a/drivers/block/paride/pf.c b/drivers/block/paride/pf.c
index bef3b99..bb51218 100644
--- a/drivers/block/paride/pf.c
+++ b/drivers/block/paride/pf.c
@@ -750,10 +750,10 @@ static int pf_ready(void)
 
 static struct request_queue *pf_queue;
 
-static void pf_end_request(int uptodate)
+static void pf_end_request(int err)
 {
 	if (pf_req) {
-		end_request(pf_req, uptodate);
+		__blk_end_request_all(pf_req, err);
 		pf_req = NULL;
 	}
 }
@@ -773,7 +773,7 @@ repeat:
 	pf_count = pf_req->current_nr_sectors;
 
 	if (pf_block + pf_count > get_capacity(pf_req->rq_disk)) {
-		pf_end_request(0);
+		pf_end_request(-EIO);
 		goto repeat;
 	}
 
@@ -788,7 +788,7 @@ repeat:
 		pi_do_claimed(pf_current->pi, do_pf_write);
 	else {
 		pf_busy = 0;
-		pf_end_request(0);
+		pf_end_request(-EIO);
 		goto repeat;
 	}
 }
@@ -805,7 +805,7 @@ static int pf_next_buf(void)
 		return 1;
 	if (!pf_count) {
 		spin_lock_irqsave(&pf_spin_lock, saved_flags);
-		pf_end_request(1);
+		pf_end_request(0);
 		pf_req = elv_next_request(pf_queue);
 		spin_unlock_irqrestore(&pf_spin_lock, saved_flags);
 		if (!pf_req)
@@ -816,12 +816,12 @@ static int pf_next_buf(void)
 	return 0;
 }
 
-static inline void next_request(int success)
+static inline void next_request(int err)
 {
 	unsigned long saved_flags;
 
 	spin_lock_irqsave(&pf_spin_lock, saved_flags);
-	pf_end_request(success);
+	pf_end_request(err);
 	pf_busy = 0;
 	do_pf_request(pf_queue);
 	spin_unlock_irqrestore(&pf_spin_lock, saved_flags);
@@ -844,7 +844,7 @@ static void do_pf_read_start(void)
 			pi_do_claimed(pf_current->pi, do_pf_read_start);
 			return;
 		}
-		next_request(0);
+		next_request(-EIO);
 		return;
 	}
 	pf_mask = STAT_DRQ;
@@ -863,7 +863,7 @@ static void do_pf_read_drq(void)
 				pi_do_claimed(pf_current->pi, do_pf_read_start);
 				return;
 			}
-			next_request(0);
+			next_request(-EIO);
 			return;
 		}
 		pi_read_block(pf_current->pi, pf_buf, 512);
@@ -871,7 +871,7 @@ static void do_pf_read_drq(void)
 			break;
 	}
 	pi_disconnect(pf_current->pi);
-	next_request(1);
+	next_request(0);
 }
 
 static void do_pf_write(void)
@@ -890,7 +890,7 @@ static void do_pf_write_start(void)
 			pi_do_claimed(pf_current->pi, do_pf_write_start);
 			return;
 		}
-		next_request(0);
+		next_request(-EIO);
 		return;
 	}
 
@@ -903,7 +903,7 @@ static void do_pf_write_start(void)
 				pi_do_claimed(pf_current->pi, do_pf_write_start);
 				return;
 			}
-			next_request(0);
+			next_request(-EIO);
 			return;
 		}
 		pi_write_block(pf_current->pi, pf_buf, 512);
@@ -923,11 +923,11 @@ static void do_pf_write_done(void)
 			pi_do_claimed(pf_current->pi, do_pf_write_start);
 			return;
 		}
-		next_request(0);
+		next_request(-EIO);
 		return;
 	}
 	pi_disconnect(pf_current->pi);
-	next_request(1);
+	next_request(0);
 }
 
 static int __init pf_init(void)
diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index bccc42b..896d0d1 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -158,7 +158,7 @@ static int ps3disk_submit_request_sg(struct ps3_storage_device *dev,
 	if (res) {
 		dev_err(&dev->sbd.core, "%s:%u: %s failed %d\n", __func__,
 			__LINE__, op, res);
-		end_request(req, 0);
+		__blk_end_request_all(req, -EIO);
 		return 0;
 	}
 
@@ -180,7 +180,7 @@ static int ps3disk_submit_flush_request(struct ps3_storage_device *dev,
 	if (res) {
 		dev_err(&dev->sbd.core, "%s:%u: sync cache failed 0x%llx\n",
 			__func__, __LINE__, res);
-		end_request(req, 0);
+		__blk_end_request_all(req, -EIO);
 		return 0;
 	}
 
@@ -205,7 +205,7 @@ static void ps3disk_do_request(struct ps3_storage_device *dev,
 				break;
 		} else {
 			blk_dump_rq_flags(req, DEVICE_NAME " bad request");
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 	}
diff --git a/drivers/block/swim3.c b/drivers/block/swim3.c
index 6129653..f661057 100644
--- a/drivers/block/swim3.c
+++ b/drivers/block/swim3.c
@@ -320,15 +320,15 @@ static void start_request(struct floppy_state *fs)
 #endif
 
 		if (req->sector < 0 || req->sector >= fs->total_secs) {
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 		if (req->current_nr_sectors == 0) {
-			end_request(req, 1);
+			__blk_end_request_all(req, 0);
 			continue;
 		}
 		if (fs->ejected) {
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 
@@ -336,7 +336,7 @@ static void start_request(struct floppy_state *fs)
 			if (fs->write_prot < 0)
 				fs->write_prot = swim3_readbit(fs, WRITE_PROT);
 			if (fs->write_prot) {
-				end_request(req, 0);
+				__blk_end_request_all(req, -EIO);
 				continue;
 			}
 		}
@@ -508,7 +508,7 @@ static void act(struct floppy_state *fs)
 		case do_transfer:
 			if (fs->cur_cyl != fs->req_cyl) {
 				if (fs->retries > 5) {
-					end_request(fd_req, 0);
+					__blk_end_request_all(fd_req, -EIO);
 					fs->state = idle;
 					return;
 				}
@@ -540,7 +540,7 @@ static void scan_timeout(unsigned long data)
 	out_8(&sw->intr_enable, 0);
 	fs->cur_cyl = -1;
 	if (fs->retries > 5) {
-		end_request(fd_req, 0);
+		__blk_end_request_all(fd_req, -EIO);
 		fs->state = idle;
 		start_request(fs);
 	} else {
@@ -559,7 +559,7 @@ static void seek_timeout(unsigned long data)
 	out_8(&sw->select, RELAX);
 	out_8(&sw->intr_enable, 0);
 	printk(KERN_ERR "swim3: seek timeout\n");
-	end_request(fd_req, 0);
+	__blk_end_request_all(fd_req, -EIO);
 	fs->state = idle;
 	start_request(fs);
 }
@@ -583,7 +583,7 @@ static void settle_timeout(unsigned long data)
 		return;
 	}
 	printk(KERN_ERR "swim3: seek settle timeout\n");
-	end_request(fd_req, 0);
+	__blk_end_request_all(fd_req, -EIO);
 	fs->state = idle;
 	start_request(fs);
 }
@@ -615,7 +615,7 @@ static void xfer_timeout(unsigned long data)
 	fd_req->current_nr_sectors -= s;
 	printk(KERN_ERR "swim3: timeout %sing sector %ld\n",
 	       (rq_data_dir(fd_req)==WRITE? "writ": "read"), (long)fd_req->sector);
-	end_request(fd_req, 0);
+	__blk_end_request_all(fd_req, -EIO);
 	fs->state = idle;
 	start_request(fs);
 }
@@ -646,7 +646,7 @@ static irqreturn_t swim3_interrupt(int irq, void *dev_id)
 				printk(KERN_ERR "swim3: seen sector but cyl=ff?\n");
 				fs->cur_cyl = -1;
 				if (fs->retries > 5) {
-					end_request(fd_req, 0);
+					__blk_end_request_all(fd_req, -EIO);
 					fs->state = idle;
 					start_request(fs);
 				} else {
@@ -731,7 +731,7 @@ static irqreturn_t swim3_interrupt(int irq, void *dev_id)
 				printk("swim3: error %sing block %ld (err=%x)\n",
 				       rq_data_dir(fd_req) == WRITE? "writ": "read",
 				       (long)fd_req->sector, err);
-				end_request(fd_req, 0);
+				__blk_end_request_all(fd_req, -EIO);
 				fs->state = idle;
 			}
 		} else {
@@ -740,7 +740,7 @@ static irqreturn_t swim3_interrupt(int irq, void *dev_id)
 				printk(KERN_ERR "swim3: fd dma: stat=%x resid=%d\n", stat, resid);
 				printk(KERN_ERR "  state=%d, dir=%x, intr=%x, err=%x\n",
 				       fs->state, rq_data_dir(fd_req), intr, err);
-				end_request(fd_req, 0);
+				__blk_end_request_all(fd_req, -EIO);
 				fs->state = idle;
 				start_request(fs);
 				break;
@@ -749,7 +749,7 @@ static irqreturn_t swim3_interrupt(int irq, void *dev_id)
 			fd_req->current_nr_sectors -= fs->scount;
 			fd_req->buffer += fs->scount * 512;
 			if (fd_req->current_nr_sectors <= 0) {
-				end_request(fd_req, 1);
+				__blk_end_request_all(fd_req, 0);
 				fs->state = idle;
 			} else {
 				fs->req_sector += fs->scount;
diff --git a/drivers/block/xd.c b/drivers/block/xd.c
index 64b496f..291ddc3 100644
--- a/drivers/block/xd.c
+++ b/drivers/block/xd.c
@@ -314,21 +314,22 @@ static void do_xd_request (struct request_queue * q)
 		int retry;
 
 		if (!blk_fs_request(req)) {
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 		if (block + count > get_capacity(req->rq_disk)) {
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 		if (rw != READ && rw != WRITE) {
 			printk("do_xd_request: unknown request\n");
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 		for (retry = 0; (retry < XD_RETRIES) && !res; retry++)
 			res = xd_readwrite(rw, disk, req->buffer, block, count);
-		end_request(req, res);	/* wrap up, 0 = fail, 1 = success */
+		/* wrap up, 0 = success, -errno = fail */
+		__blk_end_request_all(req, res);
 	}
 }
 
@@ -418,7 +419,7 @@ static int xd_readwrite (u_char operation,XD_INFO *p,char *buffer,u_int block,u_
 				printk("xd%c: %s timeout, recalibrating drive\n",'a'+drive,(operation == READ ? "read" : "write"));
 				xd_recalibrate(drive);
 				spin_lock_irq(&xd_lock);
-				return (0);
+				return -EIO;
 			case 2:
 				if (sense[0] & 0x30) {
 					printk("xd%c: %s - ",'a'+drive,(operation == READ ? "reading" : "writing"));
@@ -439,7 +440,7 @@ static int xd_readwrite (u_char operation,XD_INFO *p,char *buffer,u_int block,u_
 				else
 					printk(" - no valid disk address\n");
 				spin_lock_irq(&xd_lock);
-				return (0);
+				return -EIO;
 		}
 		if (xd_dma_buffer)
 			for (i=0; i < (temp * 0x200); i++)
@@ -448,7 +449,7 @@ static int xd_readwrite (u_char operation,XD_INFO *p,char *buffer,u_int block,u_
 		count -= temp, buffer += temp * 0x200, block += temp;
 	}
 	spin_lock_irq(&xd_lock);
-	return (1);
+	return 0;
 }
 
 /* xd_recalibrate: recalibrate a given drive and reset controller if necessary */
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index cd6cfe3..01efaaa 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -302,7 +302,7 @@ static void do_blkif_request(struct request_queue *rq)
 	while ((req = elv_next_request(rq)) != NULL) {
 		info = req->rq_disk->private_data;
 		if (!blk_fs_request(req)) {
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 
diff --git a/drivers/block/xsysace.c b/drivers/block/xsysace.c
index 119be34..d6a3da9 100644
--- a/drivers/block/xsysace.c
+++ b/drivers/block/xsysace.c
@@ -472,7 +472,7 @@ struct request *ace_get_next_request(struct request_queue * q)
 	while ((req = elv_next_request(q)) != NULL) {
 		if (blk_fs_request(req))
 			break;
-		end_request(req, 0);
+		__blk_end_request_all(req, -EIO);
 	}
 	return req;
 }
@@ -500,7 +500,7 @@ static void ace_fsm_dostate(struct ace_device *ace)
 
 		/* Drop all pending requests */
 		while ((req = elv_next_request(ace->queue)) != NULL)
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 
 		/* Drop back to IDLE state and notify waiters */
 		ace->fsm_state = ACE_FSM_STATE_IDLE;
diff --git a/drivers/block/z2ram.c b/drivers/block/z2ram.c
index 80754cd..4172f2c 100644
--- a/drivers/block/z2ram.c
+++ b/drivers/block/z2ram.c
@@ -77,7 +77,7 @@ static void do_z2_request(struct request_queue *q)
 		if (start + len > z2ram_size) {
 			printk( KERN_ERR DEVICE_NAME ": bad access: block=%lu, count=%u\n",
 				req->sector, req->current_nr_sectors);
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 		while (len) {
@@ -93,7 +93,7 @@ static void do_z2_request(struct request_queue *q)
 			start += size;
 			len -= size;
 		}
-		end_request(req, 1);
+		__blk_end_request_all(req, 0);
 	}
 }
 
diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
index fee9a9e..c782778 100644
--- a/drivers/cdrom/gdrom.c
+++ b/drivers/cdrom/gdrom.c
@@ -654,17 +654,17 @@ static void gdrom_request(struct request_queue *rq)
 	while ((req = elv_next_request(rq)) != NULL) {
 		if (!blk_fs_request(req)) {
 			printk(KERN_DEBUG "GDROM: Non-fs request ignored\n");
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 		}
 		if (rq_data_dir(req) != READ) {
 			printk(KERN_NOTICE "GDROM: Read only device -");
 			printk(" write request ignored\n");
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 		}
 		if (req->nr_sectors)
 			gdrom_request_handler_dma(req);
 		else
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 	}
 }
 
diff --git a/drivers/message/i2o/i2o_block.c b/drivers/message/i2o/i2o_block.c
index a443e13..3b03eef 100644
--- a/drivers/message/i2o/i2o_block.c
+++ b/drivers/message/i2o/i2o_block.c
@@ -923,7 +923,7 @@ static void i2o_block_request_fn(struct request_queue *q)
 				break;
 			}
 		} else
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 	}
 };
 
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 1409f01..461b4a8 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -54,33 +54,33 @@ static int do_blktrans_request(struct mtd_blktrans_ops *tr,
 
 	if (req->cmd_type == REQ_TYPE_LINUX_BLOCK &&
 	    req->cmd[0] == REQ_LB_OP_DISCARD)
-		return !tr->discard(dev, block, nsect);
+		return tr->discard(dev, block, nsect);
 
 	if (!blk_fs_request(req))
-		return 0;
+		return -EIO;
 
 	if (req->sector + req->current_nr_sectors > get_capacity(req->rq_disk))
-		return 0;
+		return -EIO;
 
 	switch(rq_data_dir(req)) {
 	case READ:
 		for (; nsect > 0; nsect--, block++, buf += tr->blksize)
 			if (tr->readsect(dev, block, buf))
-				return 0;
-		return 1;
+				return -EIO;
+		return 0;
 
 	case WRITE:
 		if (!tr->writesect)
-			return 0;
+			return -EIO;
 
 		for (; nsect > 0; nsect--, block++, buf += tr->blksize)
 			if (tr->writesect(dev, block, buf))
-				return 0;
-		return 1;
+				return -EIO;
+		return 0;
 
 	default:
 		printk(KERN_NOTICE "Unknown request %u\n", rq_data_dir(req));
-		return 0;
+		return -EIO;
 	}
 }
 
@@ -96,7 +96,7 @@ static int mtd_blktrans_thread(void *arg)
 	while (!kthread_should_stop()) {
 		struct request *req;
 		struct mtd_blktrans_dev *dev;
-		int res = 0;
+		int res;
 
 		req = elv_next_request(rq);
 
@@ -119,7 +119,7 @@ static int mtd_blktrans_thread(void *arg)
 
 		spin_lock_irq(rq->queue_lock);
 
-		end_request(req, res);
+		__blk_end_request_all(req, res);
 	}
 	spin_unlock_irq(rq->queue_lock);
 
diff --git a/drivers/sbus/char/jsflash.c b/drivers/sbus/char/jsflash.c
index a9a9893..9ef95af 100644
--- a/drivers/sbus/char/jsflash.c
+++ b/drivers/sbus/char/jsflash.c
@@ -195,25 +195,25 @@ static void jsfd_do_request(struct request_queue *q)
 		size_t len = req->current_nr_sectors << 9;
 
 		if ((offset + len) > jdp->dsize) {
-               		end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 
 		if (rq_data_dir(req) != READ) {
 			printk(KERN_ERR "jsfd: write\n");
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 
 		if ((jdp->dbase & 0xff000000) != 0x20000000) {
 			printk(KERN_ERR "jsfd: bad base %x\n", (int)jdp->dbase);
-			end_request(req, 0);
+			__blk_end_request_all(req, -EIO);
 			continue;
 		}
 
 		jsfd_read(req->buffer, jdp->dbase + offset, len);
 
-		end_request(req, 1);
+		__blk_end_request_all(req, 0);
 	}
 }
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 6ba7dbf..ec3e855 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -823,9 +823,8 @@ extern unsigned int blk_rq_cur_bytes(struct request *rq);
  * blk_update_request() completes given number of bytes and updates
  * the request without completing it.
  *
- * blk_end_request() and friends.  __blk_end_request() and
- * end_request() must be called with the request queue spinlock
- * acquired.
+ * blk_end_request() and friends.  __blk_end_request() must be called
+ * with the request queue spinlock acquired.
  *
  * Several drivers define their own end_request and call
  * blk_end_request() for parts of the original function.
@@ -931,32 +930,6 @@ static inline bool blk_end_bidi_request(struct request *rq, int error,
 	return __blk_end_io(rq, error, nr_bytes, bidi_bytes, false);
 }
 
-/**
- * end_request - end I/O on the current segment of the request
- * @rq:		the request being processed
- * @uptodate:	error value or %0/%1 uptodate flag
- *
- * Description:
- *     Ends I/O on the current segment of a request. If that is the only
- *     remaining segment, the request is also completed and freed.
- *
- *     This is a remnant of how older block drivers handled I/O completions.
- *     Modern drivers typically end I/O on the full request in one go, unless
- *     they have a residual value to account for. For that case this function
- *     isn't really useful, unless the residual just happens to be the
- *     full current segment. In other words, don't use this function in new
- *     code. Use blk_end_request() or __blk_end_request() to end a request.
- **/
-static inline void end_request(struct request *rq, int uptodate)
-{
-	int error = 0;
-
-	if (uptodate <= 0)
-		error = uptodate ? uptodate : -EIO;
-
-	__blk_end_io(rq, error, rq->hard_cur_sectors << 9, 0, true);
-}
-
 extern void blk_complete_request(struct request *);
 extern void __blk_complete_request(struct request *);
 extern void blk_abort_request(struct request *);
-- 
1.6.0.2
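
For readers converting drivers, the removed end_request() mapped its legacy
@uptodate argument onto the 0/-errno convention that __blk_end_request_all()
takes directly.  Here is a rough standalone sketch of that mapping (plain
user-space C, not kernel code; MODEL_EIO is a stand-in for the kernel's EIO):

```c
#include <assert.h>

#define MODEL_EIO 5	/* stand-in for the kernel's EIO (5) */

/*
 * Model of the conversion the removed end_request() performed:
 * uptodate > 0 meant success, 0 meant generic failure (-EIO), and a
 * negative value was already an -errno and passed through unchanged.
 * Converted call sites pass 0 or -errno to __blk_end_request_all()
 * directly, with no translation step.
 */
static int uptodate_to_error(int uptodate)
{
	if (uptodate > 0)
		return 0;			/* 1 -> success */
	return uptodate ? uptodate : -MODEL_EIO; /* 0 -> -EIO, <0 as-is */
}
```

This is why the mechanical conversions above turn `end_request(req, 0)` into
`__blk_end_request_all(req, -EIO)` and `end_request(req, 1)` into
`__blk_end_request_all(req, 0)`.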


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 16/17] ubd: simplify block request completion
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (14 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 15/17] block: kill end_request() Tejun Heo
@ 2009-03-16  2:28 ` Tejun Heo
  2009-03-16  2:29 ` [PATCH 17/17] block: clean up unnecessary stuff from block drivers Tejun Heo
  2009-03-16 17:53 ` [GIT PATCH] block: cleanup patches, take#2 Bartlomiej Zolnierkiewicz
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:28 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier; +Cc: Tejun Heo, Jeff Dike

Impact: cleanup

ubd had its own block request partial completion mechanism, which is
unnecessary as block layer already does it.  Kill ubd_end_request()
and ubd_finish() and replace them with direct call to
blk_end_request().

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jeff Dike <jdike@linux.intel.com>
---
 arch/um/drivers/ubd_kern.c |   23 +----------------------
 1 files changed, 1 insertions(+), 22 deletions(-)

diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index 0a86811..906ecdf 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -451,23 +451,6 @@ static void do_ubd_request(struct request_queue * q);
 
 /* Only changed by ubd_init, which is an initcall. */
 static int thread_fd = -1;
-
-static void ubd_end_request(struct request *req, int bytes, int error)
-{
-	blk_end_request(req, error, bytes);
-}
-
-/* Callable only from interrupt context - otherwise you need to do
- * spin_lock_irq()/spin_lock_irqsave() */
-static inline void ubd_finish(struct request *req, int bytes)
-{
-	if(bytes < 0){
-		ubd_end_request(req, 0, -EIO);
-		return;
-	}
-	ubd_end_request(req, bytes, 0);
-}
-
 static LIST_HEAD(restart);
 
 /* XXX - move this inside ubd_intr. */
@@ -475,7 +458,6 @@ static LIST_HEAD(restart);
 static void ubd_handler(void)
 {
 	struct io_thread_req *req;
-	struct request *rq;
 	struct ubd *ubd;
 	struct list_head *list, *next_ele;
 	unsigned long flags;
@@ -492,10 +474,7 @@ static void ubd_handler(void)
 			return;
 		}
 
-		rq = req->req;
-		rq->nr_sectors -= req->length >> 9;
-		if(rq->nr_sectors == 0)
-			ubd_finish(rq, rq->hard_nr_sectors << 9);
+		blk_end_request(req->req, 0, req->length);
 		kfree(req);
 	}
 	reactivate_fd(thread_fd, UBD_IRQ);
-- 
1.6.0.2
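
The reason ubd's private bookkeeping could go away is that blk_end_request()
itself accounts for partial completion: it retires the given number of bytes
and only finishes the request once nothing remains.  A hypothetical
user-space model of that behavior (struct and function names are stand-ins,
not the kernel's definitions):

```c
#include <assert.h>

struct model_request {
	unsigned int remaining;		/* bytes still outstanding */
};

/*
 * Model of blk_end_request(rq, 0, bytes): retire @bytes of the
 * request and report whether it is still pending (1) or fully
 * completed (0).  A driver can hand each finished chunk straight to
 * this call instead of tracking residual sectors itself.
 */
static int model_end_request(struct model_request *rq, unsigned int bytes)
{
	if (bytes > rq->remaining)
		bytes = rq->remaining;
	rq->remaining -= bytes;
	return rq->remaining != 0;
}
```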



* [PATCH 17/17] block: clean up unnecessary stuff from block drivers
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (15 preceding siblings ...)
  2009-03-16  2:28 ` [PATCH 16/17] ubd: simplify block request completion Tejun Heo
@ 2009-03-16  2:29 ` Tejun Heo
  2009-03-16 17:53 ` [GIT PATCH] block: cleanup patches, take#2 Bartlomiej Zolnierkiewicz
  17 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-16  2:29 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier
  Cc: Tejun Heo, Jörg Dorchain, Geert Uytterhoeven

Impact: cleanup

rq_data_dir() can only be READ or WRITE and rq->sector and nr_sectors
are always automatically updated after partial request completion.
Don't worry about rq_data_dir() not being either READ or WRITE or
manually update sector and nr_sectors.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jörg Dorchain <joerg@dorchain.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
---
 drivers/block/amiflop.c |    7 -------
 drivers/block/ataflop.c |    4 ----
 drivers/block/xd.c      |    9 ++-------
 3 files changed, 2 insertions(+), 18 deletions(-)

diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c
index 163750e..72ee010 100644
--- a/drivers/block/amiflop.c
+++ b/drivers/block/amiflop.c
@@ -1371,11 +1371,6 @@ static void redo_fd_request(void)
 		       "0x%08lx\n", track, sector, data);
 #endif
 
-		if ((rq_data_dir(CURRENT) != READ) && (rq_data_dir(CURRENT) != WRITE)) {
-			printk(KERN_WARNING "do_fd_request: unknown command\n");
-			__blk_end_request_all(CURRENT, -EIO);
-			goto repeat;
-		}
 		if (get_track(drive, track) == -1) {
 			__blk_end_request_all(CURRENT, -EIO);
 			goto repeat;
@@ -1407,8 +1402,6 @@ static void redo_fd_request(void)
 			break;
 		}
 	}
-	CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
-	CURRENT->sector += CURRENT->current_nr_sectors;
 
 	__blk_end_request_all(CURRENT, 0);
 	goto repeat;
diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
index c9844f0..d19c9d6 100644
--- a/drivers/block/ataflop.c
+++ b/drivers/block/ataflop.c
@@ -732,8 +732,6 @@ static void do_fd_action( int drive )
 		    }
 		    else {
 			/* all sectors finished */
-			CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
-			CURRENT->sector += CURRENT->current_nr_sectors;
 			__blk_end_request_all(CURRENT, 0);
 			redo_fd_request();
 			return;
@@ -1139,8 +1137,6 @@ static void fd_rwsec_done1(int status)
 	}
 	else {
 		/* all sectors finished */
-		CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
-		CURRENT->sector += CURRENT->current_nr_sectors;
 		__blk_end_request_all(CURRENT, 0);
 		redo_fd_request();
 	}
diff --git a/drivers/block/xd.c b/drivers/block/xd.c
index 291ddc3..85d8ef3 100644
--- a/drivers/block/xd.c
+++ b/drivers/block/xd.c
@@ -308,7 +308,6 @@ static void do_xd_request (struct request_queue * q)
 	while ((req = elv_next_request(q)) != NULL) {
 		unsigned block = req->sector;
 		unsigned count = req->nr_sectors;
-		int rw = rq_data_dir(req);
 		XD_INFO *disk = req->rq_disk->private_data;
 		int res = 0;
 		int retry;
@@ -321,13 +320,9 @@ static void do_xd_request (struct request_queue * q)
 			__blk_end_request_all(req, -EIO);
 			continue;
 		}
-		if (rw != READ && rw != WRITE) {
-			printk("do_xd_request: unknown request\n");
-			__blk_end_request_all(req, -EIO);
-			continue;
-		}
 		for (retry = 0; (retry < XD_RETRIES) && !res; retry++)
-			res = xd_readwrite(rw, disk, req->buffer, block, count);
+			res = xd_readwrite(rq_data_dir(req), disk, req->buffer,
+					   block, count);
 		/* wrap up, 0 = success, -errno = fail */
 		__blk_end_request_all(req, res);
 	}
-- 
1.6.0.2
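
The invariant this cleanup leans on is that a request's data direction is a
single flag bit, so rq_data_dir() can only evaluate to READ (0) or WRITE (1);
the removed "unknown request" branches were unreachable.  A minimal sketch,
with macro names that are stand-ins for the real kernel definitions:

```c
#include <assert.h>

#define MODEL_READ	0
#define MODEL_WRITE	1
#define MODEL_RW_MASK	0x1	/* single direction bit in the flags */

/*
 * Model of rq_data_dir(): masking out one bit can only yield 0 or 1,
 * so there is no third "unknown" direction to defend against.
 */
static int model_data_dir(unsigned int cmd_flags)
{
	return cmd_flags & MODEL_RW_MASK;
}
```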



* Re: [PATCH 15/17] block: kill end_request()
  2009-03-16  2:28 ` [PATCH 15/17] block: kill end_request() Tejun Heo
@ 2009-03-16  3:23   ` Grant Likely
  2009-03-16  3:27     ` Grant Likely
  2009-03-21  2:58   ` Tejun Heo
  1 sibling, 1 reply; 27+ messages in thread
From: Grant Likely @ 2009-03-16  3:23 UTC (permalink / raw)
  To: Tejun Heo
  Cc: axboe, linux-kernel, bzolnier, Jörg Dorchain,
	Geert Uytterhoeven, Tim Waugh, Stephen Rothwell, Paul Mackerras,
	Jeremy Fitzhardinge, Markus Lidel, David Woodhouse, Pete Zaitcev

On Sun, Mar 15, 2009 at 8:28 PM, Tejun Heo <tj@kernel.org> wrote:
> Impact: kill obsolete interface function
>
> end_request() has been kept around for backward compatibility;
> however, it seems to be about time for it to go away.
[...]
> Cc: Grant Likely <grant.likely@secretlab.ca>

I've actually got a conflicting xsysace.c patch queued up.  To avoid
any problems, would it be okay by you if I roll the end_request
changes into my patch and repost it tomorrow?

g.

-- 
Grant Likely, B.Sc., P.Eng.
Secret Lab Technologies Ltd.


* Re: [PATCH 15/17] block: kill end_request()
  2009-03-16  3:23   ` Grant Likely
@ 2009-03-16  3:27     ` Grant Likely
  0 siblings, 0 replies; 27+ messages in thread
From: Grant Likely @ 2009-03-16  3:27 UTC (permalink / raw)
  To: Tejun Heo
  Cc: axboe, linux-kernel, bzolnier, Jörg Dorchain,
	Geert Uytterhoeven, Tim Waugh, Stephen Rothwell, Paul Mackerras,
	Jeremy Fitzhardinge, Markus Lidel, David Woodhouse, Pete Zaitcev

On Sun, Mar 15, 2009 at 9:23 PM, Grant Likely <grant.likely@secretlab.ca> wrote:
> On Sun, Mar 15, 2009 at 8:28 PM, Tejun Heo <tj@kernel.org> wrote:
>> Impact: kill obsolete interface function
>>
>> end_request() has been kept around for backward compatibility;
>> however, it seems to be about time for it to go away.
> [...]
>> Cc: Grant Likely <grant.likely@secretlab.ca>
>
> I've actually got a conflicting xsysace.c patch queued up.  To avoid
> any problems, would it be okay by you if I roll the end_request
> changes into my patch and repost it tomorrow?

Oops, never mind.  I see that your patch is build on top of mine.  Ignore me.

Acked-by: Grant Likely <grant.likely@secretlab.ca>

g.

-- 
Grant Likely, B.Sc., P.Eng.
Secret Lab Technologies Ltd.


* Re: [GIT PATCH] block: cleanup patches, take#2
  2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
                   ` (16 preceding siblings ...)
  2009-03-16  2:29 ` [PATCH 17/17] block: clean up unnecessary stuff from block drivers Tejun Heo
@ 2009-03-16 17:53 ` Bartlomiej Zolnierkiewicz
  2009-03-17  0:10   ` Tejun Heo
  17 siblings, 1 reply; 27+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2009-03-16 17:53 UTC (permalink / raw)
  To: Tejun Heo; +Cc: axboe, linux-kernel

On Monday 16 March 2009, Tejun Heo wrote:
> Hello,
> 
> This patchset is available in the following git tree.
> 
>  git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git block-cleanup
> 
> This patchset contains the following 17 cleanup patches.
> 
>  0001-ide-use-blk_run_queue-instead-of-blk_start_queuei.patch
>  0002-ide-don-t-set-REQ_SOFTBARRIER.patch
>  0003-ide-use-blk_update_request-instead-of-blk_end_req.patch
>  0004-block-merge-blk_invoke_request_fn-into-__blk_run_.patch
>  0005-block-kill-blk_start_queueing.patch
>  0006-block-don-t-set-REQ_NOMERGE-unnecessarily.patch
>  0007-block-cleanup-REQ_SOFTBARRIER-usages.patch
>  0008-block-clean-up-misc-stuff-after-block-layer-timeout.patch
>  0009-block-reorder-request-completion-functions.patch
>  0010-block-reorganize-request-fetching-functions.patch
>  0011-block-kill-blk_end_request_callback.patch
>  0012-block-clean-up-request-completion-API.patch
>  0013-block-move-rq-start_time-initialization-to-blk_rq_.patch
>  0014-block-implement-and-use-__-blk_end_request_all.patch
>  0015-block-kill-end_request.patch
>  0016-ubd-simplify-block-request-completion.patch
>  0017-block-clean-up-unnecessary-stuff-from-block-drivers.patch
> 
> It's on top of the current linux-2.6-block/for-2.6.30[1].  Changes
> from the last take[2] are.
> 
> * IDE changes separated out to 0001-0003
> * IDE end_all conversion dropped
> 
> Bartlomiej, 0001-0003 are mostly trivial and shouldn't cause too much
> merge headaches later.  Can these go through block tree?  I'll base

Patches look fine but 0002-0003 will cause pata/block merge conflicts
for linux-next once they go into block tree so no ACK from me for this
approach.

$ patch -p1 --dry-run < 0002.patch
patching file drivers/ide/ide-disk.c
Hunk #1 FAILED at 405.
1 out of 1 hunk FAILED -- saving rejects to file drivers/ide/ide-disk.c.rej
patching file drivers/ide/ide-ioctls.c

$ patch -p1 --dry-run < 0003.patch
patching file drivers/ide/ide-cd.c
Reversed (or previously applied) patch detected!  Assume -R? [n]

Thanks,
Bart


* Re: [GIT PATCH] block: cleanup patches, take#2
  2009-03-16 17:53 ` [GIT PATCH] block: cleanup patches, take#2 Bartlomiej Zolnierkiewicz
@ 2009-03-17  0:10   ` Tejun Heo
  2009-03-18 17:17     ` Bartlomiej Zolnierkiewicz
  0 siblings, 1 reply; 27+ messages in thread
From: Tejun Heo @ 2009-03-17  0:10 UTC (permalink / raw)
  To: Bartlomiej Zolnierkiewicz; +Cc: axboe, linux-kernel

Hello, Bartlomiej.

Bartlomiej Zolnierkiewicz wrote:
> Patches look fine but 0002-0003 will cause pata/block merge conflicts
> for linux-next once they go into block tree so no ACK from me for this
> approach.
> 
> $ patch -p1 --dry-run < 0002.patch
> patching file drivers/ide/ide-disk.c
> Hunk #1 FAILED at 405.
> 1 out of 1 hunk FAILED -- saving rejects to file drivers/ide/ide-disk.c.rej
> patching file drivers/ide/ide-ioctls.c
> 
> $ patch -p1 --dry-run < 0003.patch
> patching file drivers/ide/ide-cd.c
> Reversed (or previously applied) patch detected!  Assume -R? [n]

Heh... for some reason, I think Stephen wouldn't have much problem
merging those conflicts.

I was hoping to push this patchset into 2.6.30.  The thing is that if
you only want to take changes from -linus and don't want to provide
git trees, your tree is kind of blocked from both sides except around
-rc1 window, so if there are multiple related changesets, they either
have to go in one after another during a -rc1 window or they need to
be split over multiple -rc1 windows, either of which isn't gonna work
very well.

Please note that this isn't exactly some overhead that is unduly
placed on you.  Mid-layer or inter-related API changes often incur
merge conflicts and things get very difficult unless there's some
level of cooperation among related trees.

I understand that you're constrained time- and resource-wise, and I will
be happy to make things easier on your side, but options are severely
limited if you don't want to take any changes other than from
upstream.  It would be best if you can maintain IDE changes in a git
tree.  All that you lose are petty controls over change history.  The
tree might look less tidy but it makes things much easier when
multiple trees are involved.  I'll be happy to provide merge commits
between blk and ide at sync points, so that you can pull from them and
don't have to worry about conflicts.  I don't really think it will add
a lot to your workload.

That said, let's postpone this patchset post -rc1 window and see how
things can be worked out then.  Hmmm... I'll move the IDE patches on
top of linux-next/pata-2.6 with other IDE patches.

Jens, please keep reviewing.  I'll keep track of ack status.

Thanks.

-- 
tejun


* Re: [GIT PATCH] block: cleanup patches, take#2
  2009-03-17  0:10   ` Tejun Heo
@ 2009-03-18 17:17     ` Bartlomiej Zolnierkiewicz
  2009-03-19  0:19       ` Tejun Heo
  0 siblings, 1 reply; 27+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2009-03-18 17:17 UTC (permalink / raw)
  To: Tejun Heo; +Cc: axboe, linux-kernel

On Tuesday 17 March 2009, Tejun Heo wrote:
> Hello, Bartlomiej.
> 
> Bartlomiej Zolnierkiewicz wrote:
> > Patches look fine but 0002-0003 will cause pata/block merge conflicts
> > for linux-next once they go into block tree so no ACK from me for this
> > approach.
> > 
> > $ patch -p1 --dry-run < 0002.patch
> > patching file drivers/ide/ide-disk.c
> > Hunk #1 FAILED at 405.
> > 1 out of 1 hunk FAILED -- saving rejects to file drivers/ide/ide-disk.c.rej
> > patching file drivers/ide/ide-ioctls.c
> > 
> > $ patch -p1 --dry-run < 0003.patch
> > patching file drivers/ide/ide-cd.c
> > Reversed (or previously applied) patch detected!  Assume -R? [n]
> 
> Heh... for some reason, I think Stephen wouldn't have much problem
> merging those conflicts.

Well, you can just ask Stephen if he is fine with fixing merge conflicts
for a week or so.  If he agrees, fine with me.  I just wouldn't like to see
the _whole_ tree dropped from linux-next because of the last moment block
_cleanup_ patches.

> I was hoping to push this patchset into 2.6.30.  The thing is that if
> you only want to take changes from -linus and don't want to provide
> git trees, your tree is kind of blocked from both sides except around
> -rc1 window, so if there are multiple related changesets, they either
> have to go in one after another during a -rc1 window or they need to
> be split over multiple -rc1 windows, either of which isn't gonna work
> very well.
> 
> Please note that this isn't exactly some overhead which is unduly
> weighed on you.  Mid-layer or inter-related API changes often incur
> merge conflicts and things get very difficult unless there's some
> level of cooperation among related trees.
> 
> I understand that you're constrained time and resource-wise and will
> be happy to make things easier on your side but options are severely
> limited if you don't want to take any changes other than from
> upstream.  It would be best if you can maintain IDE changes in a git
> tree.  All that you lose are petty controls over change history.  The
> tree might look less tidy but it makes things much easier when
> multiple trees are involved.  I'll be happy to provide merge commits

I have been planning a quilt -> git conversion of the pata-2.6 tree for some
time now, but these merge conflicts happen very seldom (once in 6-12 months)
while the transition period would require quite a lot of time and work...

Anyway point taken.

> between blk and ide at sync points, so that you can pull from them and
> don't have to worry about conflicts.  I don't really think it will add
> a lot to your workload.
> 
> That said, let's postpone this patchset post -rc1 window and see how
> things can be worked out then.  Hmmm... I'll move the IDE patches on
> top of linux-next/pata-2.6 with other IDE patches.

Please do and thanks for understanding.

I think that we can deal with the rest of patches without a problem in the
second week of the merge window so everything will be nicely sorted out by
the time of -rc1.

Thanks.
Bart


* Re: [GIT PATCH] block: cleanup patches, take#2
  2009-03-18 17:17     ` Bartlomiej Zolnierkiewicz
@ 2009-03-19  0:19       ` Tejun Heo
  0 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-19  0:19 UTC (permalink / raw)
  To: Bartlomiej Zolnierkiewicz; +Cc: axboe, linux-kernel

Hello,

Bartlomiej Zolnierkiewicz wrote:
>> Heh... for some reason, I think Stephen wouldn't have much problem
>> merging those conflicts.
> 
> Well, you can just ask Stephen if he is fine with fixing merge conflicts
> for a week or so.  If he agrees fine with me.  I just wouldn't like to see
> the _whole_ tree dropped from linux-next because of the last moment block
> _cleanup_ patches.

Okay, let's postpone them to .31 window.

>> I understand that you're constrained time and resource-wise and will
>> be happy to make things easier on your side but options are severely
>> limited if you don't want to take any changes other than from
>> upstream.  It would be best if you can maintain IDE changes in a git
>> tree.  All that you lose are petty controls over change history.  The
>> tree might look less tidy but it makes things much easier when
>> multiple trees are involved.  I'll be happy to provide merge commits
> 
> I have been planning on quilt -> git conversion of pata-2.6 tree for some
> time now but these merge conflicts happen very seldom (once in 6-12 months)
> while the transition period would require quite a lot of time and work...
> 
> Anyway point taken.

Ah... that sounds great.  Yeah, conversion does take time and effort
to get accustomed to, but I think it will be well worth the while.

>> between blk and ide at sync points, so that you can pull from them and
>> don't have to worry about conflicts.  I don't really think it will add
>> a lot to your workload.
>>
>> That said, let's postpone this patchset post -rc1 window and see how
>> things can be worked out then.  Hmmm... I'll move the IDE patches on
>> top of linux-next/pata-2.6 with other IDE patches.
> 
> Please do and thanks for understanding.
> 
> I think that we can deal with the rest of patches without a problem in the
> second week of the merge window so everything will be nicely sorted out by
> the time of -rc1.

Thanks.  Much appreciated.  I'll send IDE patchset in a few days.

-- 
tejun


* Re: [PATCH 15/17] block: kill end_request()
  2009-03-16  2:28 ` [PATCH 15/17] block: kill end_request() Tejun Heo
  2009-03-16  3:23   ` Grant Likely
@ 2009-03-21  2:58   ` Tejun Heo
  2009-03-24 11:37     ` Jens Axboe
  1 sibling, 1 reply; 27+ messages in thread
From: Tejun Heo @ 2009-03-21  2:58 UTC (permalink / raw)
  To: axboe, linux-kernel, bzolnier
  Cc: Jörg Dorchain, Geert Uytterhoeven, Tim Waugh,
	Stephen Rothwell, Paul Mackerras, Jeremy Fitzhardinge,
	Grant Likely, Markus Lidel, David Woodhouse, Pete Zaitcev

Tejun Heo wrote:
> Impact: kill obsolete interface function
> 
> end_request() has been kept around for backward compatibility;
> however, it seems to be about time for it to go away.
> 
> * There aren't too many users left.
> 
> * Its use of @uptodate is pretty confusing.
> 
> * In some cases, newer code ends up using a mixture of end_request() and
>   [__]blk_end_request[_all](), which is way too confusing.
> 
> So, kill it.
> 
> Most conversions are straightforward.  Noteworthy ones are...
> 
> * paride/pcd: next_request() updated to take 0/-errno instead of 1/0.
> 
> * paride/pf: pf_end_request() and next_request() updated to take
>   0/-errno instead of 1/0.
> 
> * xd: xd_readwrite() updated to return 0/-errno instead of 1/0.
> 
> * mtd/mtd_blkdevs: blktrans_discard_request() updated to return
>   0/-errno instead of 1/0.  Unnecessary local variable res
>   initialization removed from mtd_blktrans_thread().

This patch isn't correct.  Will post updated version later.

Thanks.

-- 
tejun


* Re: [PATCH 15/17] block: kill end_request()
  2009-03-21  2:58   ` Tejun Heo
@ 2009-03-24 11:37     ` Jens Axboe
  2009-03-24 13:07       ` Tejun Heo
  0 siblings, 1 reply; 27+ messages in thread
From: Jens Axboe @ 2009-03-24 11:37 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, bzolnier, Jörg Dorchain, Geert Uytterhoeven,
	Tim Waugh, Stephen Rothwell, Paul Mackerras, Jeremy Fitzhardinge,
	Grant Likely, Markus Lidel, David Woodhouse, Pete Zaitcev

On Sat, Mar 21 2009, Tejun Heo wrote:
> Tejun Heo wrote:
> > Impact: kill obsolete interface function
> > 
> > end_request() has been kept around for backward compatibility;
> > however, it seems to be about time for it to go away.
> > 
> > * There aren't too many users left.
> > 
> > * Its use of @uptodate is pretty confusing.
> > 
> > * In some cases, newer code ends up using a mixture of end_request() and
> >   [__]blk_end_request[_all](), which is way too confusing.
> > 
> > So, kill it.
> > 
> > Most conversions are straightforward.  Noteworthy ones are...
> > 
> > * paride/pcd: next_request() updated to take 0/-errno instead of 1/0.
> > 
> > * paride/pf: pf_end_request() and next_request() updated to take
> >   0/-errno instead of 1/0.
> > 
> > * xd: xd_readwrite() updated to return 0/-errno instead of 1/0.
> > 
> > * mtd/mtd_blkdevs: blktrans_discard_request() updated to return
> >   0/-errno instead of 1/0.  Unnecessary local variable res
> >   initialization removed from mtd_blktrans_thread().
> 
> This patch isn't correct.  Will post updated version later.

The patchset is no longer in the for-2.6.30 upstream branch of the block
git repo, due to this glitch and the debate on the IDE conflicts (which
I still find extremely silly and an overreaction).

I trust you will resend it when you are ready!

-- 
Jens Axboe



* Re: [PATCH 15/17] block: kill end_request()
  2009-03-24 11:37     ` Jens Axboe
@ 2009-03-24 13:07       ` Tejun Heo
  0 siblings, 0 replies; 27+ messages in thread
From: Tejun Heo @ 2009-03-24 13:07 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-kernel, bzolnier, Jörg Dorchain, Geert Uytterhoeven,
	Tim Waugh, Stephen Rothwell, Paul Mackerras, Jeremy Fitzhardinge,
	Grant Likely, Markus Lidel, David Woodhouse, Pete Zaitcev

Hello, Jens.

Jens Axboe wrote:
> The patchset is no longer in the for-2.6.30 upstream branch of the block
> git repo, due to this glitch and the debate on the IDE conflicts (which
> I still find extremely silly and an overreaction).
> 
> I trust you will resend it when you are ready!

Yeap, everything is falling into place.  I'm reshuffling patches and
testing things.  I'll post the refreshed patchsets in a few days: the
IDE updates, the rq->data_len / *nr_sectors consolidation, and then the
peek/fetch patchset.

Thanks.

-- 
tejun


end of thread, other threads:[~2009-03-24 13:10 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-03-16  2:28 [GIT PATCH] block: cleanup patches, take#2 Tejun Heo
2009-03-16  2:28 ` [PATCH 01/17] ide: use blk_run_queue() instead of blk_start_queueing() Tejun Heo
2009-03-16  2:28 ` [PATCH 02/17] ide: don't set REQ_SOFTBARRIER Tejun Heo
2009-03-16  2:28 ` [PATCH 03/17] ide: use blk_update_request() instead of blk_end_request_callback() Tejun Heo
2009-03-16  2:28 ` [PATCH 04/17] block: merge blk_invoke_request_fn() into __blk_run_queue() Tejun Heo
2009-03-16  2:28 ` [PATCH 05/17] block: kill blk_start_queueing() Tejun Heo
2009-03-16  2:28 ` [PATCH 06/17] block: don't set REQ_NOMERGE unnecessarily Tejun Heo
2009-03-16  2:28 ` [PATCH 07/17] block: cleanup REQ_SOFTBARRIER usages Tejun Heo
2009-03-16  2:28 ` [PATCH 08/17] block: clean up misc stuff after block layer timeout conversion Tejun Heo
2009-03-16  2:28 ` [PATCH 09/17] block: reorder request completion functions Tejun Heo
2009-03-16  2:28 ` [PATCH 10/17] block: reorganize request fetching functions Tejun Heo
2009-03-16  2:28 ` [PATCH 11/17] block: kill blk_end_request_callback() Tejun Heo
2009-03-16  2:28 ` [PATCH 12/17] block: clean up request completion API Tejun Heo
2009-03-16  2:28 ` [PATCH 13/17] block: move rq->start_time initialization to blk_rq_init() Tejun Heo
2009-03-16  2:28 ` [PATCH 14/17] block: implement and use [__]blk_end_request_all() Tejun Heo
2009-03-16  2:28 ` [PATCH 15/17] block: kill end_request() Tejun Heo
2009-03-16  3:23   ` Grant Likely
2009-03-16  3:27     ` Grant Likely
2009-03-21  2:58   ` Tejun Heo
2009-03-24 11:37     ` Jens Axboe
2009-03-24 13:07       ` Tejun Heo
2009-03-16  2:28 ` [PATCH 16/17] ubd: simplify block request completion Tejun Heo
2009-03-16  2:29 ` [PATCH 17/17] block: clean up unnecessary stuff from block drivers Tejun Heo
2009-03-16 17:53 ` [GIT PATCH] block: cleanup patches, take#2 Bartlomiej Zolnierkiewicz
2009-03-17  0:10   ` Tejun Heo
2009-03-18 17:17     ` Bartlomiej Zolnierkiewicz
2009-03-19  0:19       ` Tejun Heo
