From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mx1.redhat.com ([209.132.183.28]:45146 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751140AbdHEG53
	(ORCPT ); Sat, 5 Aug 2017 02:57:29 -0400
From: Ming Lei
To: Jens Axboe, linux-block@vger.kernel.org, Christoph Hellwig
Cc: Bart Van Assche, Laurence Oberman, Ming Lei
Subject: [PATCH V2 03/20] blk-mq: introduce blk_mq_dispatch_rq_from_ctx()
Date: Sat, 5 Aug 2017 14:56:48 +0800
Message-Id: <20170805065705.12989-4-ming.lei@redhat.com>
In-Reply-To: <20170805065705.12989-1-ming.lei@redhat.com>
References: <20170805065705.12989-1-ming.lei@redhat.com>
Sender: linux-block-owner@vger.kernel.org
List-Id: linux-block@vger.kernel.org

This function is introduced for dequeuing one request at a time from a
sw queue, so that we can dispatch requests in the scheduler's way.

More importantly, for some SCSI devices driver tags are host wide and
their number is quite big, but each LUN has a very limited queue depth.
This function lets us avoid dequeuing too many requests from the sw
queue when ->dispatch isn't flushed completely.

Signed-off-by: Ming Lei
---
 block/blk-mq.c | 38 ++++++++++++++++++++++++++++++++++++++
 block/blk-mq.h |  2 ++
 2 files changed, 40 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 041f7b7fa0d6..d7a89d009f62 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -842,6 +842,44 @@ void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list)
 }
 EXPORT_SYMBOL_GPL(blk_mq_flush_busy_ctxs);
 
+struct dispatch_rq_data {
+	struct blk_mq_hw_ctx *hctx;
+	struct request *rq;
+};
+
+static bool dispatch_rq_from_ctx(struct sbitmap *sb, unsigned int bitnr, void *data)
+{
+	struct dispatch_rq_data *dispatch_data = data;
+	struct blk_mq_hw_ctx *hctx = dispatch_data->hctx;
+	struct blk_mq_ctx *ctx = hctx->ctxs[bitnr];
+
+	spin_lock(&ctx->lock);
+	if (unlikely(!list_empty(&ctx->rq_list))) {
+		dispatch_data->rq = list_entry_rq(ctx->rq_list.next);
+		list_del_init(&dispatch_data->rq->queuelist);
+		if (list_empty(&ctx->rq_list))
+			sbitmap_clear_bit(sb, bitnr);
+	}
+	spin_unlock(&ctx->lock);
+
+	return !dispatch_data->rq;
+}
+
+struct request *blk_mq_dispatch_rq_from_ctx(struct blk_mq_hw_ctx *hctx,
+					    struct blk_mq_ctx *start)
+{
+	unsigned off = start ? start->index_hw : 0;
+	struct dispatch_rq_data data = {
+		.hctx = hctx,
+		.rq = NULL,
+	};
+
+	__sbitmap_for_each_set(&hctx->ctx_map, off,
+			       dispatch_rq_from_ctx, &data);
+
+	return data.rq;
+}
+
 static inline unsigned int queued_to_index(unsigned int queued)
 {
 	if (!queued)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 60b01c0309bc..2bfb1254841b 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -35,6 +35,8 @@ void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
 bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx);
 bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
 			bool wait);
+struct request *blk_mq_dispatch_rq_from_ctx(struct blk_mq_hw_ctx *hctx,
+			struct blk_mq_ctx *start);
 
 /*
  * Internal helpers for allocating/freeing the request map
-- 
2.9.4
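
For reference, a minimal sketch of how a scheduler-side dispatch loop
could consume the new helper. It is illustrative only and not part of
this patch: example_dispatch_from_ctxs() and example_issue_to_driver()
are hypothetical names, and error handling is omitted.

	/*
	 * Illustrative sketch (not part of this patch): pull one request
	 * at a time from the sw queues instead of flushing them all.
	 */
	static void example_dispatch_from_ctxs(struct blk_mq_hw_ctx *hctx)
	{
		struct blk_mq_ctx *ctx = NULL;
		struct request *rq;

		for (;;) {
			/* dequeue a single request from the sw queues */
			rq = blk_mq_dispatch_rq_from_ctx(hctx, ctx);
			if (!rq)
				break;

			/* resume the next scan from this request's sw queue */
			ctx = rq->mq_ctx;

			/* hypothetical helper: hand the request to the driver */
			if (!example_issue_to_driver(hctx, rq))
				break;
		}
	}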