* [PATCH 0/3] blk-mq: driver tag related cleanup
@ 2020-06-30 2:23 Ming Lei
  2020-06-30 2:23 ` [PATCH 1/3] blk-mq: move blk_mq_get_driver_tag into blk-mq.c Ming Lei
  ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Ming Lei @ 2020-06-30 2:23 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Ming Lei, Christoph Hellwig

Hi Jens,

The 1st and 2nd patches move the get/put driver tag helpers into
blk-mq.c, and the 3rd patch centralises the related handling in
blk_mq_get_driver_tag(), so both the flush and blk-mq code get
simplified.

Ming Lei (3):
  blk-mq: move blk_mq_get_driver_tag into blk-mq.c
  blk-mq: move blk_mq_put_driver_tag() into blk-mq.c
  blk-mq: centralise related handling into blk_mq_get_driver_tag

 block/blk-flush.c  | 13 +++------
 block/blk-mq-tag.c | 58 --------------------------------------
 block/blk-mq-tag.h | 41 +++++++++++++++++----------
 block/blk-mq.c     | 69 ++++++++++++++++++++++++++++++++++++++--------
 block/blk-mq.h     | 20 --------------
 block/blk.h        |  5 ----
 6 files changed, 88 insertions(+), 118 deletions(-)

Cc: Christoph Hellwig <hch@infradead.org>
-- 
2.25.2

^ permalink raw reply	[flat|nested] 12+ messages in thread
* [PATCH 1/3] blk-mq: move blk_mq_get_driver_tag into blk-mq.c 2020-06-30 2:23 [PATCH 0/3] blk-mq: driver tag related cleanup Ming Lei @ 2020-06-30 2:23 ` Ming Lei 2020-06-30 4:57 ` Christoph Hellwig 2020-06-30 6:10 ` Hannes Reinecke 2020-06-30 2:23 ` [PATCH 2/3] blk-mq: move blk_mq_put_driver_tag() " Ming Lei 2020-06-30 2:23 ` [PATCH 3/3] blk-mq: centralise related handling into blk_mq_get_driver_tag Ming Lei 2 siblings, 2 replies; 12+ messages in thread From: Ming Lei @ 2020-06-30 2:23 UTC (permalink / raw) To: Jens Axboe; +Cc: linux-block, Ming Lei, Christoph Hellwig blk_mq_get_driver_tag() is only used by blk-mq.c and is supposed to stay in blk-mq.c, so move it and preparing for cleanup code of get/put driver tag. Meantime hctx_may_queue() is moved to header file and it is fine since it is defined as inline always. No functional change. Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Ming Lei <ming.lei@redhat.com> --- block/blk-mq-tag.c | 58 ---------------------------------------------- block/blk-mq-tag.h | 39 ++++++++++++++++++++++++------- block/blk-mq.c | 34 +++++++++++++++++++++++++++ 3 files changed, 65 insertions(+), 66 deletions(-) diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c index ae722f8b13fb..d54890b1a44e 100644 --- a/block/blk-mq-tag.c +++ b/block/blk-mq-tag.c @@ -56,37 +56,6 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx) blk_mq_tag_wakeup_all(tags, false); } -/* - * For shared tag users, we track the number of currently active users - * and attempt to provide a fair share of the tag depth for each of them. 
- */ -static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx, - struct sbitmap_queue *bt) -{ - unsigned int depth, users; - - if (!hctx || !(hctx->flags & BLK_MQ_F_TAG_SHARED)) - return true; - if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state)) - return true; - - /* - * Don't try dividing an ant - */ - if (bt->sb.depth == 1) - return true; - - users = atomic_read(&hctx->tags->active_queues); - if (!users) - return true; - - /* - * Allow at least some tags - */ - depth = max((bt->sb.depth + users - 1) / users, 4U); - return atomic_read(&hctx->nr_active) < depth; -} - static int __blk_mq_get_tag(struct blk_mq_alloc_data *data, struct sbitmap_queue *bt) { @@ -191,33 +160,6 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data) return tag + tag_offset; } -bool __blk_mq_get_driver_tag(struct request *rq) -{ - struct sbitmap_queue *bt = &rq->mq_hctx->tags->bitmap_tags; - unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags; - bool shared = blk_mq_tag_busy(rq->mq_hctx); - int tag; - - if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) { - bt = &rq->mq_hctx->tags->breserved_tags; - tag_offset = 0; - } - - if (!hctx_may_queue(rq->mq_hctx, bt)) - return false; - tag = __sbitmap_queue_get(bt); - if (tag == BLK_MQ_NO_TAG) - return false; - - rq->tag = tag + tag_offset; - if (shared) { - rq->rq_flags |= RQF_MQ_INFLIGHT; - atomic_inc(&rq->mq_hctx->nr_active); - } - rq->mq_hctx->tags->rqs[rq->tag] = rq; - return true; -} - void blk_mq_put_tag(struct blk_mq_tags *tags, struct blk_mq_ctx *ctx, unsigned int tag) { diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h index 2e4ef51cdb32..3945c7f5b944 100644 --- a/block/blk-mq-tag.h +++ b/block/blk-mq-tag.h @@ -51,14 +51,6 @@ enum { BLK_MQ_TAG_MAX = BLK_MQ_NO_TAG - 1, }; -bool __blk_mq_get_driver_tag(struct request *rq); -static inline bool blk_mq_get_driver_tag(struct request *rq) -{ - if (rq->tag != BLK_MQ_NO_TAG) - return true; - return __blk_mq_get_driver_tag(rq); -} - extern bool 
__blk_mq_tag_busy(struct blk_mq_hw_ctx *); extern void __blk_mq_tag_idle(struct blk_mq_hw_ctx *); @@ -78,6 +70,37 @@ static inline void blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx) __blk_mq_tag_idle(hctx); } +/* + * For shared tag users, we track the number of currently active users + * and attempt to provide a fair share of the tag depth for each of them. + */ +static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx, + struct sbitmap_queue *bt) +{ + unsigned int depth, users; + + if (!hctx || !(hctx->flags & BLK_MQ_F_TAG_SHARED)) + return true; + if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state)) + return true; + + /* + * Don't try dividing an ant + */ + if (bt->sb.depth == 1) + return true; + + users = atomic_read(&hctx->tags->active_queues); + if (!users) + return true; + + /* + * Allow at least some tags + */ + depth = max((bt->sb.depth + users - 1) / users, 4U); + return atomic_read(&hctx->nr_active) < depth; +} + /* * This helper should only be used for flush request to share tag * with the request cloned from, and both the two requests can't be diff --git a/block/blk-mq.c b/block/blk-mq.c index d07e55455726..0438bf388fde 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -1107,6 +1107,40 @@ static inline unsigned int queued_to_index(unsigned int queued) return min(BLK_MQ_MAX_DISPATCH_ORDER - 1, ilog2(queued) + 1); } +static bool __blk_mq_get_driver_tag(struct request *rq) +{ + struct sbitmap_queue *bt = &rq->mq_hctx->tags->bitmap_tags; + unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags; + bool shared = blk_mq_tag_busy(rq->mq_hctx); + int tag; + + if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) { + bt = &rq->mq_hctx->tags->breserved_tags; + tag_offset = 0; + } + + if (!hctx_may_queue(rq->mq_hctx, bt)) + return false; + tag = __sbitmap_queue_get(bt); + if (tag == BLK_MQ_NO_TAG) + return false; + + rq->tag = tag + tag_offset; + if (shared) { + rq->rq_flags |= RQF_MQ_INFLIGHT; + atomic_inc(&rq->mq_hctx->nr_active); + } + 
rq->mq_hctx->tags->rqs[rq->tag] = rq; + return true; +} + +static bool blk_mq_get_driver_tag(struct request *rq) +{ + if (rq->tag != BLK_MQ_NO_TAG) + return true; + return __blk_mq_get_driver_tag(rq); +} + static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode, int flags, void *key) { -- 2.25.2 ^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH 1/3] blk-mq: move blk_mq_get_driver_tag into blk-mq.c
  2020-06-30 2:23 ` [PATCH 1/3] blk-mq: move blk_mq_get_driver_tag into blk-mq.c Ming Lei
@ 2020-06-30 4:57   ` Christoph Hellwig
  2020-06-30 6:10   ` Hannes Reinecke
  1 sibling, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2020-06-30 4:57 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, linux-block, Christoph Hellwig

On Tue, Jun 30, 2020 at 10:23:54AM +0800, Ming Lei wrote:
> blk_mq_get_driver_tag() is only used by blk-mq.c and is supposed to
> stay in blk-mq.c, so move it and preparing for cleanup code of
> get/put driver tag.
> 
> Meantime hctx_may_queue() is moved to header file and it is fine
> since it is defined as inline always.

hctx_may_queue looks pretty big for an inline function to start with.
But except for that this looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 1/3] blk-mq: move blk_mq_get_driver_tag into blk-mq.c
  2020-06-30 2:23 ` [PATCH 1/3] blk-mq: move blk_mq_get_driver_tag into blk-mq.c Ming Lei
  2020-06-30 4:57   ` Christoph Hellwig
@ 2020-06-30 6:10   ` Hannes Reinecke
  1 sibling, 0 replies; 12+ messages in thread
From: Hannes Reinecke @ 2020-06-30 6:10 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe; +Cc: linux-block, Christoph Hellwig

On 6/30/20 4:23 AM, Ming Lei wrote:
> blk_mq_get_driver_tag() is only used by blk-mq.c and is supposed to
> stay in blk-mq.c, so move it and preparing for cleanup code of
> get/put driver tag.
> 
> Meantime hctx_may_queue() is moved to header file and it is fine
> since it is defined as inline always.
> 
> No functional change.
> 
> Cc: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-mq-tag.c | 58 ----------------------------------------
>  block/blk-mq-tag.h | 39 ++++++++++++++++++++++++-------
>  block/blk-mq.c     | 34 +++++++++++++++++++++++++++
>  3 files changed, 65 insertions(+), 66 deletions(-)
> 
Curiously, I stumbled across this yesterday, too.

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		Teamlead Storage & Networking
hare@suse.de			+49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 12+ messages in thread
* [PATCH 2/3] blk-mq: move blk_mq_put_driver_tag() into blk-mq.c 2020-06-30 2:23 [PATCH 0/3] blk-mq: driver tag related cleanup Ming Lei 2020-06-30 2:23 ` [PATCH 1/3] blk-mq: move blk_mq_get_driver_tag into blk-mq.c Ming Lei @ 2020-06-30 2:23 ` Ming Lei 2020-06-30 4:58 ` Christoph Hellwig 2020-06-30 6:10 ` Hannes Reinecke 2020-06-30 2:23 ` [PATCH 3/3] blk-mq: centralise related handling into blk_mq_get_driver_tag Ming Lei 2 siblings, 2 replies; 12+ messages in thread From: Ming Lei @ 2020-06-30 2:23 UTC (permalink / raw) To: Jens Axboe; +Cc: linux-block, Ming Lei, Christoph Hellwig It is used by blk-mq.c only, so move it to the source file. Suggested-by: Christoph Hellwig <hch@infradead.org> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Ming Lei <ming.lei@redhat.com> --- block/blk-mq.c | 20 ++++++++++++++++++++ block/blk-mq.h | 20 -------------------- 2 files changed, 20 insertions(+), 20 deletions(-) diff --git a/block/blk-mq.c b/block/blk-mq.c index 0438bf388fde..cabeeeb3d56c 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -670,6 +670,26 @@ static inline bool blk_mq_complete_need_ipi(struct request *rq) return cpu_online(rq->mq_ctx->cpu); } +static void __blk_mq_put_driver_tag(struct blk_mq_hw_ctx *hctx, + struct request *rq) +{ + blk_mq_put_tag(hctx->tags, rq->mq_ctx, rq->tag); + rq->tag = BLK_MQ_NO_TAG; + + if (rq->rq_flags & RQF_MQ_INFLIGHT) { + rq->rq_flags &= ~RQF_MQ_INFLIGHT; + atomic_dec(&hctx->nr_active); + } +} + +static inline void blk_mq_put_driver_tag(struct request *rq) +{ + if (rq->tag == BLK_MQ_NO_TAG || rq->internal_tag == BLK_MQ_NO_TAG) + return; + + __blk_mq_put_driver_tag(rq->mq_hctx, rq); +} + bool blk_mq_complete_request_remote(struct request *rq) { WRITE_ONCE(rq->state, MQ_RQ_COMPLETE); diff --git a/block/blk-mq.h b/block/blk-mq.h index b3ce0f3a2ad2..a62ca18b5bde 100644 --- a/block/blk-mq.h +++ b/block/blk-mq.h @@ -196,26 +196,6 @@ static inline bool blk_mq_get_dispatch_budget(struct blk_mq_hw_ctx *hctx) return true; } -static 
inline void __blk_mq_put_driver_tag(struct blk_mq_hw_ctx *hctx, - struct request *rq) -{ - blk_mq_put_tag(hctx->tags, rq->mq_ctx, rq->tag); - rq->tag = BLK_MQ_NO_TAG; - - if (rq->rq_flags & RQF_MQ_INFLIGHT) { - rq->rq_flags &= ~RQF_MQ_INFLIGHT; - atomic_dec(&hctx->nr_active); - } -} - -static inline void blk_mq_put_driver_tag(struct request *rq) -{ - if (rq->tag == BLK_MQ_NO_TAG || rq->internal_tag == BLK_MQ_NO_TAG) - return; - - __blk_mq_put_driver_tag(rq->mq_hctx, rq); -} - static inline void blk_mq_clear_mq_map(struct blk_mq_queue_map *qmap) { int cpu; -- 2.25.2 ^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH 2/3] blk-mq: move blk_mq_put_driver_tag() into blk-mq.c
  2020-06-30 2:23 ` [PATCH 2/3] blk-mq: move blk_mq_put_driver_tag() " Ming Lei
@ 2020-06-30 4:58   ` Christoph Hellwig
  2020-06-30 6:10   ` Hannes Reinecke
  1 sibling, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2020-06-30 4:58 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, linux-block, Christoph Hellwig

On Tue, Jun 30, 2020 at 10:23:55AM +0800, Ming Lei wrote:
> It is used by blk-mq.c only, so move it to the source file.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 2/3] blk-mq: move blk_mq_put_driver_tag() into blk-mq.c
  2020-06-30 2:23 ` [PATCH 2/3] blk-mq: move blk_mq_put_driver_tag() " Ming Lei
  2020-06-30 4:58   ` Christoph Hellwig
@ 2020-06-30 6:10   ` Hannes Reinecke
  1 sibling, 0 replies; 12+ messages in thread
From: Hannes Reinecke @ 2020-06-30 6:10 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe; +Cc: linux-block, Christoph Hellwig

On 6/30/20 4:23 AM, Ming Lei wrote:
> It is used by blk-mq.c only, so move it to the source file.
> 
> Suggested-by: Christoph Hellwig <hch@infradead.org>
> Cc: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-mq.c | 20 ++++++++++++++++++++
>  block/blk-mq.h | 20 --------------------
>  2 files changed, 20 insertions(+), 20 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		Teamlead Storage & Networking
hare@suse.de			+49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 12+ messages in thread
* [PATCH 3/3] blk-mq: centralise related handling into blk_mq_get_driver_tag 2020-06-30 2:23 [PATCH 0/3] blk-mq: driver tag related cleanup Ming Lei 2020-06-30 2:23 ` [PATCH 1/3] blk-mq: move blk_mq_get_driver_tag into blk-mq.c Ming Lei 2020-06-30 2:23 ` [PATCH 2/3] blk-mq: move blk_mq_put_driver_tag() " Ming Lei @ 2020-06-30 2:23 ` Ming Lei 2020-06-30 5:05 ` Christoph Hellwig 2020-06-30 6:26 ` Hannes Reinecke 2 siblings, 2 replies; 12+ messages in thread From: Ming Lei @ 2020-06-30 2:23 UTC (permalink / raw) To: Jens Axboe; +Cc: linux-block, Ming Lei, Christoph Hellwig Move blk_mq_tag_busy(), .nr_active update and request assignment into blk_mq_get_driver_tag(), all are good to do during getting driver tag. Meantime blk-flush related code is simplified and flush request needn't to update the request table manually any more. Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Ming Lei <ming.lei@redhat.com> --- block/blk-flush.c | 13 ++++--------- block/blk-mq-tag.h | 12 ------------ block/blk-mq.c | 33 +++++++++++++-------------------- block/blk.h | 5 ----- 4 files changed, 17 insertions(+), 46 deletions(-) diff --git a/block/blk-flush.c b/block/blk-flush.c index 21108a550fbf..3b0c5cfe922a 100644 --- a/block/blk-flush.c +++ b/block/blk-flush.c @@ -236,12 +236,10 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error) error = fq->rq_status; hctx = flush_rq->mq_hctx; - if (!q->elevator) { - blk_mq_tag_set_rq(hctx, flush_rq->tag, fq->orig_rq); + if (!q->elevator) flush_rq->tag = -1; - } else { + else flush_rq->internal_tag = -1; - } running = &fq->flush_queue[fq->flush_running_idx]; BUG_ON(fq->flush_pending_idx == fq->flush_running_idx); @@ -315,13 +313,10 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq, flush_rq->mq_ctx = first_rq->mq_ctx; flush_rq->mq_hctx = first_rq->mq_hctx; - if (!q->elevator) { - fq->orig_rq = first_rq; + if (!q->elevator) flush_rq->tag = first_rq->tag; - 
blk_mq_tag_set_rq(flush_rq->mq_hctx, first_rq->tag, flush_rq); - } else { + else flush_rq->internal_tag = first_rq->internal_tag; - } flush_rq->cmd_flags = REQ_OP_FLUSH | REQ_PREFLUSH; flush_rq->cmd_flags |= (flags & REQ_DRV) | (flags & REQ_FAILFAST_MASK); diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h index 3945c7f5b944..b1acac518c4e 100644 --- a/block/blk-mq-tag.h +++ b/block/blk-mq-tag.h @@ -101,18 +101,6 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx, return atomic_read(&hctx->nr_active) < depth; } -/* - * This helper should only be used for flush request to share tag - * with the request cloned from, and both the two requests can't be - * in flight at the same time. The caller has to make sure the tag - * can't be freed. - */ -static inline void blk_mq_tag_set_rq(struct blk_mq_hw_ctx *hctx, - unsigned int tag, struct request *rq) -{ - hctx->tags->rqs[tag] = rq; -} - static inline bool blk_mq_tag_is_reserved(struct blk_mq_tags *tags, unsigned int tag) { diff --git a/block/blk-mq.c b/block/blk-mq.c index cabeeeb3d56c..44b101757d33 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -277,26 +277,20 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data, { struct blk_mq_tags *tags = blk_mq_tags_from_data(data); struct request *rq = tags->static_rqs[tag]; - req_flags_t rq_flags = 0; if (data->flags & BLK_MQ_REQ_INTERNAL) { rq->tag = BLK_MQ_NO_TAG; rq->internal_tag = tag; } else { - if (data->hctx->flags & BLK_MQ_F_TAG_SHARED) { - rq_flags = RQF_MQ_INFLIGHT; - atomic_inc(&data->hctx->nr_active); - } rq->tag = tag; rq->internal_tag = BLK_MQ_NO_TAG; - data->hctx->tags->rqs[rq->tag] = rq; } /* csd/requeue_work/fifo_time is initialized before use */ rq->q = data->q; rq->mq_ctx = data->ctx; rq->mq_hctx = data->hctx; - rq->rq_flags = rq_flags; + rq->rq_flags = 0; rq->cmd_flags = data->cmd_flags; if (data->flags & BLK_MQ_REQ_PREEMPT) rq->rq_flags |= RQF_PREEMPT; @@ -380,8 +374,6 @@ static struct request 
*__blk_mq_alloc_request(struct blk_mq_alloc_data *data) retry: data->ctx = blk_mq_get_ctx(q); data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx); - if (!(data->flags & BLK_MQ_REQ_INTERNAL)) - blk_mq_tag_busy(data->hctx); /* * Waiting allocations only fail because of an inactive hctx. In that @@ -478,8 +470,6 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q, if (q->elevator) data.flags |= BLK_MQ_REQ_INTERNAL; - else - blk_mq_tag_busy(data.hctx); ret = -EWOULDBLOCK; tag = blk_mq_get_tag(&data); @@ -1131,7 +1121,6 @@ static bool __blk_mq_get_driver_tag(struct request *rq) { struct sbitmap_queue *bt = &rq->mq_hctx->tags->bitmap_tags; unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags; - bool shared = blk_mq_tag_busy(rq->mq_hctx); int tag; if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) { @@ -1146,19 +1135,23 @@ static bool __blk_mq_get_driver_tag(struct request *rq) return false; rq->tag = tag + tag_offset; - if (shared) { - rq->rq_flags |= RQF_MQ_INFLIGHT; - atomic_inc(&rq->mq_hctx->nr_active); - } - rq->mq_hctx->tags->rqs[rq->tag] = rq; return true; } static bool blk_mq_get_driver_tag(struct request *rq) { - if (rq->tag != BLK_MQ_NO_TAG) - return true; - return __blk_mq_get_driver_tag(rq); + struct blk_mq_hw_ctx *hctx = rq->mq_hctx; + bool shared = blk_mq_tag_busy(rq->mq_hctx); + + if (rq->tag == BLK_MQ_NO_TAG && !__blk_mq_get_driver_tag(rq)) + return false; + + if (shared) { + rq->rq_flags |= RQF_MQ_INFLIGHT; + atomic_inc(&hctx->nr_active); + } + hctx->tags->rqs[rq->tag] = rq; + return true; } static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode, diff --git a/block/blk.h b/block/blk.h index 3a120a070dac..459790f9783d 100644 --- a/block/blk.h +++ b/block/blk.h @@ -25,11 +25,6 @@ struct blk_flush_queue { struct list_head flush_data_in_flight; struct request *flush_rq; - /* - * flush_rq shares tag with this rq, both can't be active - * at the same time - */ - struct request *orig_rq; 
struct lock_class_key key; spinlock_t mq_flush_lock; }; -- 2.25.2 ^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH 3/3] blk-mq: centralise related handling into blk_mq_get_driver_tag
  2020-06-30 2:23 ` [PATCH 3/3] blk-mq: centralise related handling into blk_mq_get_driver_tag Ming Lei
@ 2020-06-30 5:05   ` Christoph Hellwig
  2020-06-30 6:13     ` Ming Lei
  1 sibling, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2020-06-30 5:05 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, linux-block, Christoph Hellwig

> index 21108a550fbf..3b0c5cfe922a 100644
> --- a/block/blk-flush.c
> +++ b/block/blk-flush.c
> @@ -236,12 +236,10 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
>  	error = fq->rq_status;
>  
>  	hctx = flush_rq->mq_hctx;
> +	if (!q->elevator)
>  		flush_rq->tag = -1;
> +	else
>  		flush_rq->internal_tag = -1;

These should switch to BLK_MQ_NO_TAG while you're at it.

> -	if (!(data->flags & BLK_MQ_REQ_INTERNAL))
> -		blk_mq_tag_busy(data->hctx);

BLK_MQ_REQ_INTERNAL is gone now, so this won't apply.

> static bool blk_mq_get_driver_tag(struct request *rq)
> {
> +	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
> +	bool shared = blk_mq_tag_busy(rq->mq_hctx);
> +
> +	if (rq->tag == BLK_MQ_NO_TAG && !__blk_mq_get_driver_tag(rq))
> +		return false;
> +
> +	if (shared) {
> +		rq->rq_flags |= RQF_MQ_INFLIGHT;
> +		atomic_inc(&hctx->nr_active);
> +	}
> +	hctx->tags->rqs[rq->tag] = rq;
> +	return true;
> }

The function seems a bit misnamed now, although I don't have a good
suggestion for a better name.

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 3/3] blk-mq: centralise related handling into blk_mq_get_driver_tag
  2020-06-30 5:05 ` Christoph Hellwig
@ 2020-06-30 6:13   ` Ming Lei
  0 siblings, 0 replies; 12+ messages in thread
From: Ming Lei @ 2020-06-30 6:13 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block

On Tue, Jun 30, 2020 at 06:05:57AM +0100, Christoph Hellwig wrote:
> > index 21108a550fbf..3b0c5cfe922a 100644
> > --- a/block/blk-flush.c
> > +++ b/block/blk-flush.c
> > @@ -236,12 +236,10 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
> >  	error = fq->rq_status;
> >  
> >  	hctx = flush_rq->mq_hctx;
> > +	if (!q->elevator)
> >  		flush_rq->tag = -1;
> > +	else
> >  		flush_rq->internal_tag = -1;
> 
> These should switch to BLK_MQ_NO_TAG which you're at it.

OK, we can do that in this patch.

> > -	if (!(data->flags & BLK_MQ_REQ_INTERNAL))
> > -		blk_mq_tag_busy(data->hctx);
> 
> BLK_MQ_REQ_INTERNAL is gone now, so this won't apply.

blk_mq_tag_busy() is needed for both none and io scheduler, so it is
moved into blk_mq_get_driver_tag(), and then the check on
BLK_MQ_REQ_INTERNAL is gone.

> > static bool blk_mq_get_driver_tag(struct request *rq)
> > {
> > +	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
> > +	bool shared = blk_mq_tag_busy(rq->mq_hctx);
> > +
> > +	if (rq->tag == BLK_MQ_NO_TAG && !__blk_mq_get_driver_tag(rq))
> > +		return false;
> > +
> > +	if (shared) {
> > +		rq->rq_flags |= RQF_MQ_INFLIGHT;
> > +		atomic_inc(&hctx->nr_active);
> > +	}
> > +	hctx->tags->rqs[rq->tag] = rq;
> > +	return true;
> > }
> 
> The function seems a bit misnamed now, although I don't have a good
> suggestion for a better name.

I think it is fine to leave it as-is, since what the patch does is just
to move blk_mq_tag_busy() and the RQF_MQ_INFLIGHT part from
__blk_mq_get_driver_tag() to blk_mq_get_driver_tag().

Thanks,
Ming

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 3/3] blk-mq: centralise related handling into blk_mq_get_driver_tag 2020-06-30 2:23 ` [PATCH 3/3] blk-mq: centralise related handling into blk_mq_get_driver_tag Ming Lei 2020-06-30 5:05 ` Christoph Hellwig @ 2020-06-30 6:26 ` Hannes Reinecke 2020-06-30 6:43 ` Ming Lei 1 sibling, 1 reply; 12+ messages in thread From: Hannes Reinecke @ 2020-06-30 6:26 UTC (permalink / raw) To: Ming Lei, Jens Axboe; +Cc: linux-block, Christoph Hellwig On 6/30/20 4:23 AM, Ming Lei wrote: > Move blk_mq_tag_busy(), .nr_active update and request assignment into > blk_mq_get_driver_tag(), all are good to do during getting driver tag. > > Meantime blk-flush related code is simplified and flush request needn't > to update the request table manually any more. > > Cc: Christoph Hellwig <hch@infradead.org> > Signed-off-by: Ming Lei <ming.lei@redhat.com> > --- > block/blk-flush.c | 13 ++++--------- > block/blk-mq-tag.h | 12 ------------ > block/blk-mq.c | 33 +++++++++++++-------------------- > block/blk.h | 5 ----- > 4 files changed, 17 insertions(+), 46 deletions(-) > > diff --git a/block/blk-flush.c b/block/blk-flush.c > index 21108a550fbf..3b0c5cfe922a 100644 > --- a/block/blk-flush.c > +++ b/block/blk-flush.c > @@ -236,12 +236,10 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error) > error = fq->rq_status; > > hctx = flush_rq->mq_hctx; > - if (!q->elevator) { > - blk_mq_tag_set_rq(hctx, flush_rq->tag, fq->orig_rq); > + if (!q->elevator) > flush_rq->tag = -1; > - } else { > + else > flush_rq->internal_tag = -1; > - } > > running = &fq->flush_queue[fq->flush_running_idx]; > BUG_ON(fq->flush_pending_idx == fq->flush_running_idx); > @@ -315,13 +313,10 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq, > flush_rq->mq_ctx = first_rq->mq_ctx; > flush_rq->mq_hctx = first_rq->mq_hctx; > > - if (!q->elevator) { > - fq->orig_rq = first_rq; > + if (!q->elevator) > flush_rq->tag = first_rq->tag; > - blk_mq_tag_set_rq(flush_rq->mq_hctx, 
first_rq->tag, flush_rq); > - } else { > + else > flush_rq->internal_tag = first_rq->internal_tag; > - } > > flush_rq->cmd_flags = REQ_OP_FLUSH | REQ_PREFLUSH; > flush_rq->cmd_flags |= (flags & REQ_DRV) | (flags & REQ_FAILFAST_MASK); > diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h > index 3945c7f5b944..b1acac518c4e 100644 > --- a/block/blk-mq-tag.h > +++ b/block/blk-mq-tag.h > @@ -101,18 +101,6 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx, > return atomic_read(&hctx->nr_active) < depth; > } > > -/* > - * This helper should only be used for flush request to share tag > - * with the request cloned from, and both the two requests can't be > - * in flight at the same time. The caller has to make sure the tag > - * can't be freed. > - */ > -static inline void blk_mq_tag_set_rq(struct blk_mq_hw_ctx *hctx, > - unsigned int tag, struct request *rq) > -{ > - hctx->tags->rqs[tag] = rq; > -} > - > static inline bool blk_mq_tag_is_reserved(struct blk_mq_tags *tags, > unsigned int tag) > { > diff --git a/block/blk-mq.c b/block/blk-mq.c > index cabeeeb3d56c..44b101757d33 100644 > --- a/block/blk-mq.c > +++ b/block/blk-mq.c > @@ -277,26 +277,20 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data, > { > struct blk_mq_tags *tags = blk_mq_tags_from_data(data); > struct request *rq = tags->static_rqs[tag]; > - req_flags_t rq_flags = 0; > > if (data->flags & BLK_MQ_REQ_INTERNAL) { > rq->tag = BLK_MQ_NO_TAG; > rq->internal_tag = tag; > } else { > - if (data->hctx->flags & BLK_MQ_F_TAG_SHARED) { > - rq_flags = RQF_MQ_INFLIGHT; > - atomic_inc(&data->hctx->nr_active); > - } > rq->tag = tag; > rq->internal_tag = BLK_MQ_NO_TAG; > - data->hctx->tags->rqs[rq->tag] = rq; > } > > /* csd/requeue_work/fifo_time is initialized before use */ > rq->q = data->q; > rq->mq_ctx = data->ctx; > rq->mq_hctx = data->hctx; > - rq->rq_flags = rq_flags; > + rq->rq_flags = 0; > rq->cmd_flags = data->cmd_flags; > if (data->flags & BLK_MQ_REQ_PREEMPT) > 
rq->rq_flags |= RQF_PREEMPT; > @@ -380,8 +374,6 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data) > retry: > data->ctx = blk_mq_get_ctx(q); > data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx); > - if (!(data->flags & BLK_MQ_REQ_INTERNAL)) > - blk_mq_tag_busy(data->hctx); > > /* > * Waiting allocations only fail because of an inactive hctx. In that > @@ -478,8 +470,6 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q, > > if (q->elevator) > data.flags |= BLK_MQ_REQ_INTERNAL; > - else > - blk_mq_tag_busy(data.hctx); > > ret = -EWOULDBLOCK; > tag = blk_mq_get_tag(&data); > @@ -1131,7 +1121,6 @@ static bool __blk_mq_get_driver_tag(struct request *rq) > { > struct sbitmap_queue *bt = &rq->mq_hctx->tags->bitmap_tags; > unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags; > - bool shared = blk_mq_tag_busy(rq->mq_hctx); > int tag; > > if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) { > @@ -1146,19 +1135,23 @@ static bool __blk_mq_get_driver_tag(struct request *rq) > return false; > > rq->tag = tag + tag_offset; > - if (shared) { > - rq->rq_flags |= RQF_MQ_INFLIGHT; > - atomic_inc(&rq->mq_hctx->nr_active); > - } > - rq->mq_hctx->tags->rqs[rq->tag] = rq; > return true; > } > > static bool blk_mq_get_driver_tag(struct request *rq) > { > - if (rq->tag != BLK_MQ_NO_TAG) > - return true; > - return __blk_mq_get_driver_tag(rq); > + struct blk_mq_hw_ctx *hctx = rq->mq_hctx; > + bool shared = blk_mq_tag_busy(rq->mq_hctx); > + > + if (rq->tag == BLK_MQ_NO_TAG && !__blk_mq_get_driver_tag(rq)) > + return false; > + > + if (shared) { > + rq->rq_flags |= RQF_MQ_INFLIGHT; > + atomic_inc(&hctx->nr_active); > + } > + hctx->tags->rqs[rq->tag] = rq; > + return true; > } > > static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode, > diff --git a/block/blk.h b/block/blk.h > index 3a120a070dac..459790f9783d 100644 > --- a/block/blk.h > +++ b/block/blk.h > @@ -25,11 +25,6 @@ struct 
blk_flush_queue { > struct list_head flush_data_in_flight; > struct request *flush_rq; > > - /* > - * flush_rq shares tag with this rq, both can't be active > - * at the same time > - */ > - struct request *orig_rq; > struct lock_class_key key; > spinlock_t mq_flush_lock; > }; > Can you give some more explanation why it's safe to move blk_mq_tag_busy() into blk_mq_get_driver_tag(), seeing that it was called before blk_mq_get_tag() initially? Cheers, Hannes -- Dr. Hannes Reinecke Teamlead Storage & Networking hare@suse.de +49 911 74053 688 SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 3/3] blk-mq: centralise related handling into blk_mq_get_driver_tag 2020-06-30 6:26 ` Hannes Reinecke @ 2020-06-30 6:43 ` Ming Lei 0 siblings, 0 replies; 12+ messages in thread From: Ming Lei @ 2020-06-30 6:43 UTC (permalink / raw) To: Hannes Reinecke; +Cc: Jens Axboe, linux-block, Christoph Hellwig On Tue, Jun 30, 2020 at 08:26:11AM +0200, Hannes Reinecke wrote: > On 6/30/20 4:23 AM, Ming Lei wrote: > > Move blk_mq_tag_busy(), .nr_active update and request assignment into > > blk_mq_get_driver_tag(), all are good to do during getting driver tag. > > > > Meantime blk-flush related code is simplified and flush request needn't > > to update the request table manually any more. > > > > Cc: Christoph Hellwig <hch@infradead.org> > > Signed-off-by: Ming Lei <ming.lei@redhat.com> > > --- > > block/blk-flush.c | 13 ++++--------- > > block/blk-mq-tag.h | 12 ------------ > > block/blk-mq.c | 33 +++++++++++++-------------------- > > block/blk.h | 5 ----- > > 4 files changed, 17 insertions(+), 46 deletions(-) > > > > diff --git a/block/blk-flush.c b/block/blk-flush.c > > index 21108a550fbf..3b0c5cfe922a 100644 > > --- a/block/blk-flush.c > > +++ b/block/blk-flush.c > > @@ -236,12 +236,10 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error) > > error = fq->rq_status; > > hctx = flush_rq->mq_hctx; > > - if (!q->elevator) { > > - blk_mq_tag_set_rq(hctx, flush_rq->tag, fq->orig_rq); > > + if (!q->elevator) > > flush_rq->tag = -1; > > - } else { > > + else > > flush_rq->internal_tag = -1; > > - } > > running = &fq->flush_queue[fq->flush_running_idx]; > > BUG_ON(fq->flush_pending_idx == fq->flush_running_idx); > > @@ -315,13 +313,10 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq, > > flush_rq->mq_ctx = first_rq->mq_ctx; > > flush_rq->mq_hctx = first_rq->mq_hctx; > > - if (!q->elevator) { > > - fq->orig_rq = first_rq; > > + if (!q->elevator) > > flush_rq->tag = first_rq->tag; > > - 
> > -		blk_mq_tag_set_rq(flush_rq->mq_hctx, first_rq->tag, flush_rq);
> > -	} else {
> > +	else
> >  		flush_rq->internal_tag = first_rq->internal_tag;
> > -	}
> >  	flush_rq->cmd_flags = REQ_OP_FLUSH | REQ_PREFLUSH;
> >  	flush_rq->cmd_flags |= (flags & REQ_DRV) | (flags & REQ_FAILFAST_MASK);
> > diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
> > index 3945c7f5b944..b1acac518c4e 100644
> > --- a/block/blk-mq-tag.h
> > +++ b/block/blk-mq-tag.h
> > @@ -101,18 +101,6 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
> >  	return atomic_read(&hctx->nr_active) < depth;
> >  }
> > -/*
> > - * This helper should only be used for flush request to share tag
> > - * with the request cloned from, and both the two requests can't be
> > - * in flight at the same time. The caller has to make sure the tag
> > - * can't be freed.
> > - */
> > -static inline void blk_mq_tag_set_rq(struct blk_mq_hw_ctx *hctx,
> > -		unsigned int tag, struct request *rq)
> > -{
> > -	hctx->tags->rqs[tag] = rq;
> > -}
> > -
> >  static inline bool blk_mq_tag_is_reserved(struct blk_mq_tags *tags,
> >  					  unsigned int tag)
> >  {
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index cabeeeb3d56c..44b101757d33 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -277,26 +277,20 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
> >  {
> >  	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
> >  	struct request *rq = tags->static_rqs[tag];
> > -	req_flags_t rq_flags = 0;
> >  	if (data->flags & BLK_MQ_REQ_INTERNAL) {
> >  		rq->tag = BLK_MQ_NO_TAG;
> >  		rq->internal_tag = tag;
> >  	} else {
> > -		if (data->hctx->flags & BLK_MQ_F_TAG_SHARED) {
> > -			rq_flags = RQF_MQ_INFLIGHT;
> > -			atomic_inc(&data->hctx->nr_active);
> > -		}
> >  		rq->tag = tag;
> >  		rq->internal_tag = BLK_MQ_NO_TAG;
> > -		data->hctx->tags->rqs[rq->tag] = rq;
> >  	}
> >  	/* csd/requeue_work/fifo_time is initialized before use */
> >  	rq->q = data->q;
> >  	rq->mq_ctx = data->ctx;
> >  	rq->mq_hctx = data->hctx;
> > -	rq->rq_flags = rq_flags;
> > +	rq->rq_flags = 0;
> >  	rq->cmd_flags = data->cmd_flags;
> >  	if (data->flags & BLK_MQ_REQ_PREEMPT)
> >  		rq->rq_flags |= RQF_PREEMPT;
> > @@ -380,8 +374,6 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
> >  retry:
> >  	data->ctx = blk_mq_get_ctx(q);
> >  	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
> > -	if (!(data->flags & BLK_MQ_REQ_INTERNAL))
> > -		blk_mq_tag_busy(data->hctx);
> >  	/*
> >  	 * Waiting allocations only fail because of an inactive hctx.  In that
> > @@ -478,8 +470,6 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
> >  	if (q->elevator)
> >  		data.flags |= BLK_MQ_REQ_INTERNAL;
> > -	else
> > -		blk_mq_tag_busy(data.hctx);
> >  	ret = -EWOULDBLOCK;
> >  	tag = blk_mq_get_tag(&data);
> > @@ -1131,7 +1121,6 @@ static bool __blk_mq_get_driver_tag(struct request *rq)
> >  {
> >  	struct sbitmap_queue *bt = &rq->mq_hctx->tags->bitmap_tags;
> >  	unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags;
> > -	bool shared = blk_mq_tag_busy(rq->mq_hctx);
> >  	int tag;
> >  	if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) {
> > @@ -1146,19 +1135,23 @@ static bool __blk_mq_get_driver_tag(struct request *rq)
> >  		return false;
> >  	rq->tag = tag + tag_offset;
> > -	if (shared) {
> > -		rq->rq_flags |= RQF_MQ_INFLIGHT;
> > -		atomic_inc(&rq->mq_hctx->nr_active);
> > -	}
> > -	rq->mq_hctx->tags->rqs[rq->tag] = rq;
> >  	return true;
> >  }
> >  static bool blk_mq_get_driver_tag(struct request *rq)
> >  {
> > -	if (rq->tag != BLK_MQ_NO_TAG)
> > -		return true;
> > -	return __blk_mq_get_driver_tag(rq);
> > +	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
> > +	bool shared = blk_mq_tag_busy(rq->mq_hctx);
> > +
> > +	if (rq->tag == BLK_MQ_NO_TAG && !__blk_mq_get_driver_tag(rq))
> > +		return false;
> > +
> > +	if (shared) {
> > +		rq->rq_flags |= RQF_MQ_INFLIGHT;
> > +		atomic_inc(&hctx->nr_active);
> > +	}
> > +	hctx->tags->rqs[rq->tag] = rq;
> > +	return true;
> > +}
> >  static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
> > diff --git a/block/blk.h b/block/blk.h
> > index 3a120a070dac..459790f9783d 100644
> > --- a/block/blk.h
> > +++ b/block/blk.h
> > @@ -25,11 +25,6 @@ struct blk_flush_queue {
> >  	struct list_head	flush_data_in_flight;
> >  	struct request		*flush_rq;
> > -	/*
> > -	 * flush_rq shares tag with this rq, both can't be active
> > -	 * at the same time
> > -	 */
> > -	struct request		*orig_rq;
> >  	struct lock_class_key	key;
> >  	spinlock_t		mq_flush_lock;
> >  };
> 
> Can you give some more explanation why it's safe to move blk_mq_tag_busy()
> into blk_mq_get_driver_tag(), seeing that it was called before
> blk_mq_get_tag() initially?

In theory it should be done before blk_mq_get_tag() in the none case,
because hctx_may_queue() uses this info to decide whether one driver tag
can be allocated for this lun/ns, for the sake of fairness. However,
blk_mq_tag_busy() is just a one-shot thing, and a very short window of
unfairness shouldn't be a big deal.

Since you guys care about this change, I will avoid this kind of change
in V2.

Thanks,
Ming

^ permalink raw reply	[flat|nested] 12+ messages in thread
end of thread, other threads:[~2020-06-30  6:43 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-30  2:23 [PATCH 0/3] blk-mq: driver tag related cleanup Ming Lei
2020-06-30  2:23 ` [PATCH 1/3] blk-mq: move blk_mq_get_driver_tag into blk-mq.c Ming Lei
2020-06-30  4:57   ` Christoph Hellwig
2020-06-30  6:10   ` Hannes Reinecke
2020-06-30  2:23 ` [PATCH 2/3] blk-mq: move blk_mq_put_driver_tag() " Ming Lei
2020-06-30  4:58   ` Christoph Hellwig
2020-06-30  6:10   ` Hannes Reinecke
2020-06-30  2:23 ` [PATCH 3/3] blk-mq: centralise related handling into blk_mq_get_driver_tag Ming Lei
2020-06-30  5:05   ` Christoph Hellwig
2020-06-30  6:13     ` Ming Lei
2020-06-30  6:26   ` Hannes Reinecke
2020-06-30  6:43     ` Ming Lei