From: Tejun Heo <tj@kernel.org>
To: jaxboe@fusionio.com, linux-fsdevel@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-ide@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
	hch@lst.de, James.Bottomley@suse.de, tytso@mit.edu,
	chris.mason@oracle.com, swhiteho@redhat.com,
	konishi.ryusuke@lab.ntt.co.jp, dm-devel@redhat.com, vst@vlnb.net,
	jack@suse.cz, rwheeler@redhat.com, hare@suse.de
Cc: Tejun Heo <tj@kernel.org>, Christoph Hellwig <hch@infradead.org>
Subject: [PATCH 09/11] block: implement REQ_FLUSH/FUA based interface for FLUSH/FUA requests
Date: Thu, 12 Aug 2010 14:41:29 +0200
Message-ID: <1281616891-5691-10-git-send-email-tj@kernel.org>
In-Reply-To: <1281616891-5691-1-git-send-email-tj@kernel.org>

Now that the backend conversion is complete, export sequenced
FLUSH/FUA capability through the REQ_FLUSH/FUA flags.  REQ_FLUSH
means the device cache should be flushed before executing the
request.  REQ_FUA means that the data in the request should be on
non-volatile media on completion.

The block layer will choose the correct way of implementing the
semantics and execute it.  The request may be passed to the device
directly if the device can handle it; otherwise, it will be sequenced
using one or more proxy requests.  A device will never see REQ_FLUSH
and/or REQ_FUA flags that it doesn't support.

* QUEUE_ORDERED_* are removed and QUEUE_FSEQ_* are moved into
  blk-flush.c.

* A REQ_FLUSH request w/o data could also be passed directly to
  drivers without sequencing, but some drivers assume that
  zero-length requests never have rq->bio set, which isn't true for
  these requests; they therefore still require the use of proxy
  requests.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
---
 block/blk-core.c       |    2 +-
 block/blk-flush.c      |   85 ++++++++++++++++++++++++++----------------------
 block/blk.h            |    3 ++
 include/linux/blkdev.h |   38 +--------------------
 4 files changed, 52 insertions(+), 76 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index efe391b..c00ace2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1204,7 +1204,7 @@ static int __make_request(struct request_queue *q, struct bio *bio)
 
 	spin_lock_irq(q->queue_lock);
 
-	if (bio->bi_rw & REQ_HARDBARRIER) {
+	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
 		where = ELEVATOR_INSERT_FRONT;
 		goto get_rq;
 	}
diff --git a/block/blk-flush.c b/block/blk-flush.c
index dd87322..452c552 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -1,5 +1,5 @@
 /*
- * Functions related to barrier IO handling
+ * Functions to sequence FLUSH and FUA writes.
  */
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -9,6 +9,15 @@
 
 #include "blk.h"
 
+/* FLUSH/FUA sequences */
+enum {
+	QUEUE_FSEQ_STARTED	= (1 << 0), /* flushing in progress */
+	QUEUE_FSEQ_PREFLUSH	= (1 << 1), /* pre-flushing in progress */
+	QUEUE_FSEQ_DATA		= (1 << 2), /* data write in progress */
+	QUEUE_FSEQ_POSTFLUSH	= (1 << 3), /* post-flushing in progress */
+	QUEUE_FSEQ_DONE		= (1 << 4),
+};
+
 static struct request *queue_next_fseq(struct request_queue *q);
 
 unsigned blk_flush_cur_seq(struct request_queue *q)
@@ -79,6 +88,7 @@ static void queue_flush(struct request_queue *q, struct request *rq,
 
 static struct request *queue_next_fseq(struct request_queue *q)
 {
+	struct request *orig_rq = q->orig_flush_rq;
 	struct request *rq = &q->flush_rq;
 
 	switch (blk_flush_cur_seq(q)) {
@@ -87,12 +97,11 @@ static struct request *queue_next_fseq(struct request_queue *q)
 		break;
 
 	case QUEUE_FSEQ_DATA:
-		/* initialize proxy request and queue it */
+		/* initialize proxy request, inherit FLUSH/FUA and queue it */
 		blk_rq_init(q, rq);
-		init_request_from_bio(rq, q->orig_flush_rq->bio);
-		rq->cmd_flags &= ~REQ_HARDBARRIER;
-		if (q->ordered & QUEUE_ORDERED_DO_FUA)
-			rq->cmd_flags |= REQ_FUA;
+		init_request_from_bio(rq, orig_rq->bio);
+		rq->cmd_flags &= ~(REQ_FLUSH | REQ_FUA);
+		rq->cmd_flags |= orig_rq->cmd_flags & (REQ_FLUSH | REQ_FUA);
 		rq->end_io = flush_data_end_io;
 
 		elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
@@ -110,60 +119,58 @@ static struct request *queue_next_fseq(struct request_queue *q)
 
 struct request *blk_do_flush(struct request_queue *q, struct request *rq)
 {
+	unsigned int fflags = q->flush_flags; /* may change, cache it */
+	bool has_flush = fflags & REQ_FLUSH, has_fua = fflags & REQ_FUA;
+	bool do_preflush = has_flush && (rq->cmd_flags & REQ_FLUSH);
+	bool do_postflush = has_flush && !has_fua && (rq->cmd_flags & REQ_FUA);
 	unsigned skip = 0;
 
-	if (!(rq->cmd_flags & REQ_HARDBARRIER))
+	/*
+	 * Special case.  If there's data but flush is not necessary,
+	 * the request can be issued directly.
+	 *
+	 * Flush w/o data should be able to be issued directly too but
+	 * currently some drivers assume that rq->bio contains
+	 * non-zero data if it isn't NULL and empty FLUSH requests
+	 * getting here usually have bio's without data.
+	 */
+	if (blk_rq_sectors(rq) && !do_preflush && !do_postflush) {
+		rq->cmd_flags &= ~REQ_FLUSH;
+		if (!has_fua)
+			rq->cmd_flags &= ~REQ_FUA;
 		return rq;
+	}
 
+	/*
+	 * Sequenced flushes can't be processed in parallel.  If
+	 * another one is already in progress, queue for later
+	 * processing.
+	 */
 	if (q->flush_seq) {
-		/*
-		 * Sequenced flush is already in progress and they
-		 * can't be processed in parallel.  Queue for later
-		 * processing.
-		 */
 		list_move_tail(&rq->queuelist, &q->pending_flushes);
 		return NULL;
 	}
 
-	if (unlikely(q->next_ordered == QUEUE_ORDERED_NONE)) {
-		/*
-		 * Queue ordering not supported.  Terminate
-		 * with prejudice.
-		 */
-		blk_dequeue_request(rq);
-		__blk_end_request_all(rq, -EOPNOTSUPP);
-		return NULL;
-	}
-
 	/*
 	 * Start a new flush sequence
 	 */
 	q->flush_err = 0;
-	q->ordered = q->next_ordered;
 	q->flush_seq |= QUEUE_FSEQ_STARTED;
 
-	/*
-	 * For an empty barrier, there's no actual BAR request, which
-	 * in turn makes POSTFLUSH unnecessary.  Mask them off.
-	 */
-	if (!blk_rq_sectors(rq))
-		q->ordered &= ~(QUEUE_ORDERED_DO_BAR |
-				QUEUE_ORDERED_DO_POSTFLUSH);
-
-	/* stash away the original request */
+	/* adjust FLUSH/FUA of the original request and stash it away */
+	rq->cmd_flags &= ~REQ_FLUSH;
+	if (!has_fua)
+		rq->cmd_flags &= ~REQ_FUA;
 	blk_dequeue_request(rq);
 	q->orig_flush_rq = rq;
 
-	if (!(q->ordered & QUEUE_ORDERED_DO_PREFLUSH))
+	/* skip unneeded sequences and return the first one */
+	if (!do_preflush)
 		skip |= QUEUE_FSEQ_PREFLUSH;
-
-	if (!(q->ordered & QUEUE_ORDERED_DO_BAR))
+	if (!blk_rq_sectors(rq))
 		skip |= QUEUE_FSEQ_DATA;
-
-	if (!(q->ordered & QUEUE_ORDERED_DO_POSTFLUSH))
+	if (!do_postflush)
 		skip |= QUEUE_FSEQ_POSTFLUSH;
-
-	/* complete skipped sequences and return the first sequence */
 	return blk_flush_complete_seq(q, skip, 0);
 }
diff --git a/block/blk.h b/block/blk.h
index 24b92bd..a09c18b 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -60,6 +60,9 @@ static inline struct request *__elv_next_request(struct request_queue *q)
 	while (1) {
 		while (!list_empty(&q->queue_head)) {
 			rq = list_entry_rq(q->queue_head.next);
+			if (!(rq->cmd_flags & (REQ_FLUSH | REQ_FUA)) ||
+			    rq == &q->flush_rq)
+				return rq;
 			rq = blk_do_flush(q, rq);
 			if (rq)
 				return rq;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 87e58f0..5ce0696 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -357,7 +357,6 @@ struct request_queue
 	/*
 	 * for flush operations
 	 */
-	unsigned int		ordered, next_ordered;
 	unsigned int		flush_flags;
 	unsigned int		flush_seq;
 	int			flush_err;
@@ -464,40 +463,6 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
 	__clear_bit(flag, &q->queue_flags);
 }
 
-enum {
-	/*
-	 * Hardbarrier is supported with one of the following methods.
-	 *
-	 * NONE		: hardbarrier unsupported
-	 * DRAIN	: ordering by draining is enough
-	 * DRAIN_FLUSH	: ordering by draining w/ pre and post flushes
-	 * DRAIN_FUA	: ordering by draining w/ pre flush and FUA write
-	 */
-	QUEUE_ORDERED_DO_PREFLUSH	= 0x10,
-	QUEUE_ORDERED_DO_BAR		= 0x20,
-	QUEUE_ORDERED_DO_POSTFLUSH	= 0x40,
-	QUEUE_ORDERED_DO_FUA		= 0x80,
-
-	QUEUE_ORDERED_NONE		= 0x00,
-
-	QUEUE_ORDERED_DRAIN		= QUEUE_ORDERED_DO_BAR,
-	QUEUE_ORDERED_DRAIN_FLUSH	= QUEUE_ORDERED_DRAIN |
-					  QUEUE_ORDERED_DO_PREFLUSH |
-					  QUEUE_ORDERED_DO_POSTFLUSH,
-	QUEUE_ORDERED_DRAIN_FUA		= QUEUE_ORDERED_DRAIN |
-					  QUEUE_ORDERED_DO_PREFLUSH |
-					  QUEUE_ORDERED_DO_FUA,
-
-	/*
-	 * FLUSH/FUA sequences.
-	 */
-	QUEUE_FSEQ_STARTED	= (1 << 0), /* flushing in progress */
-	QUEUE_FSEQ_PREFLUSH	= (1 << 1), /* pre-flushing in progress */
-	QUEUE_FSEQ_DATA		= (1 << 2), /* data write in progress */
-	QUEUE_FSEQ_POSTFLUSH	= (1 << 3), /* post-flushing in progress */
-	QUEUE_FSEQ_DONE		= (1 << 4),
-};
-
 #define blk_queue_plugged(q)	test_bit(QUEUE_FLAG_PLUGGED, &(q)->queue_flags)
 #define blk_queue_tagged(q)	test_bit(QUEUE_FLAG_QUEUED, &(q)->queue_flags)
 #define blk_queue_stopped(q)	test_bit(QUEUE_FLAG_STOPPED, &(q)->queue_flags)
@@ -576,7 +541,8 @@ static inline void blk_clear_queue_full(struct request_queue *q, int sync)
  * it already be started by driver.
  */
 #define RQ_NOMERGE_FLAGS	\
-	(REQ_NOMERGE | REQ_STARTED | REQ_HARDBARRIER | REQ_SOFTBARRIER)
+	(REQ_NOMERGE | REQ_STARTED | REQ_HARDBARRIER | REQ_SOFTBARRIER | \
+	 REQ_FLUSH | REQ_FUA)
 #define rq_mergeable(rq)	\
 	(!((rq)->cmd_flags & RQ_NOMERGE_FLAGS) && \
 	 (((rq)->cmd_flags & REQ_DISCARD) || \
-- 
1.7.1
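To see how the two halves of the interface fit together, here is a
minimal sketch of a driver declaring its cache capabilities and a
submitter issuing a flush+FUA write.  It assumes the blk_queue_flush()
interface introduced by patch 03 of this series, which stores the mask
in q->flush_flags for blk_do_flush() above to consult; the driver name
"mydrv" and both functions are hypothetical illustrations, not part of
this patch.

/*
 * Sketch only: neither function is part of this patch.  "mydrv" is a
 * hypothetical driver used for illustration.
 */
#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/fs.h>

/* Driver side: declare what the device can execute natively. */
static void mydrv_init_queue(struct request_queue *q)
{
	/*
	 * Volatile write cache with FUA support.  A cacheless device
	 * would pass 0 instead, and the block layer would strip
	 * REQ_FLUSH/REQ_FUA before the driver ever sees a request.
	 */
	blk_queue_flush(q, REQ_FLUSH | REQ_FUA);
}

/* Submitter side: force a critical block through to stable media. */
static void mydrv_write_commit_block(struct bio *bio)
{
	/* Flush the cache first, then write the block with FUA. */
	submit_bio(WRITE | REQ_FLUSH | REQ_FUA, bio);
}

Given these declarations, blk_do_flush() decomposes a REQ_FLUSH|REQ_FUA
write as follows: on a flush+FUA device it becomes PREFLUSH followed by
the data write with REQ_FUA retained; on a flush-only device it becomes
PREFLUSH, data write, then POSTFLUSH; and on a cacheless device both
flags are stripped and the write is issued directly.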