Linux-Block Archive on lore.kernel.org
From: Hannes Reinecke <hare@suse.de>
To: Christoph Hellwig <hch@lst.de>
Cc: linux-block@vger.kernel.org, John Garry <john.garry@huawei.com>,
	Bart Van Assche <bvanassche@acm.org>,
	Hannes Reinecke <hare@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH 4/6] blk-mq: open code __blk_mq_alloc_request in blk_mq_alloc_request_hctx
Date: Fri, 22 May 2020 11:17:14 +0200
Message-ID: <737b4159-7059-2b85-e870-d2c9a763d452@suse.de> (raw)
In-Reply-To: <20200520170635.2094101-5-hch@lst.de>

On 5/20/20 7:06 PM, Christoph Hellwig wrote:
> blk_mq_alloc_request_hctx is only used for NVMeoF connect commands, so
> tailor it to the specific requirements, and don't both the general

bother?

> fast path code with its special twinkles.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-mq.c | 44 +++++++++++++++++++++++---------------------
>   1 file changed, 23 insertions(+), 21 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 1ffbc5d9e7cfe..42aee2978464b 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -351,21 +351,13 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
>   {
>   	struct request_queue *q = data->q;
>   	struct elevator_queue *e = q->elevator;
> -	unsigned int tag;
> -	bool clear_ctx_on_error = false;
>   	u64 alloc_time_ns = 0;
> +	unsigned int tag;
>   
>   	/* alloc_time includes depth and tag waits */
>   	if (blk_queue_rq_alloc_time(q))
>   		alloc_time_ns = ktime_get_ns();
>   
> -	if (likely(!data->ctx)) {
> -		data->ctx = blk_mq_get_ctx(q);
> -		clear_ctx_on_error = true;
> -	}
> -	if (likely(!data->hctx))
> -		data->hctx = blk_mq_map_queue(q, data->cmd_flags,
> -						data->ctx);
>   	if (data->cmd_flags & REQ_NOWAIT)
>   		data->flags |= BLK_MQ_REQ_NOWAIT;
>   
> @@ -381,17 +373,16 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
>   		    e->type->ops.limit_depth &&
>   		    !(data->flags & BLK_MQ_REQ_RESERVED))
>   			e->type->ops.limit_depth(data->cmd_flags, data);
> -	} else {
> -		blk_mq_tag_busy(data->hctx);
>   	}
>   
> +	data->ctx = blk_mq_get_ctx(q);
> +	data->hctx = blk_mq_map_queue(q, data->cmd_flags, data->ctx);
> +	if (!(data->flags & BLK_MQ_REQ_INTERNAL))
> +		blk_mq_tag_busy(data->hctx);
> +
>   	tag = blk_mq_get_tag(data);
> -	if (tag == BLK_MQ_TAG_FAIL) {
> -		if (clear_ctx_on_error)
> -			data->ctx = NULL;
> +	if (tag == BLK_MQ_TAG_FAIL)
>   		return NULL;
> -	}
> -
>   	return blk_mq_rq_ctx_init(data, tag, alloc_time_ns);
>   }
>   
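One point worth spelling out for readers of this hunk: after the change,
__blk_mq_alloc_request() always derives ctx/hctx from the CPU it happens
to run on, since blk_mq_get_ctx() is (if I remember the helper correctly)
just a wrapper around the current CPU id:

	/* sketch from memory, not part of this patch */
	static inline struct blk_mq_ctx *blk_mq_get_ctx(struct request_queue *q)
	{
		return __blk_mq_get_ctx(q, raw_smp_processor_id());
	}

which is exactly why an allocation pinned to a specific hctx can no longer
go through this helper and has to be open coded below, picking a ctx from
an online CPU in data.hctx->cpumask instead.
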
> @@ -431,17 +422,22 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
>   		.flags		= flags,
>   		.cmd_flags	= op,
>   	};
> -	struct request *rq;
> +	u64 alloc_time_ns = 0;
>   	unsigned int cpu;
> +	unsigned int tag;
>   	int ret;
>   
> +	/* alloc_time includes depth and tag waits */
> +	if (blk_queue_rq_alloc_time(q))
> +		alloc_time_ns = ktime_get_ns();
> +
>   	/*
>   	 * If the tag allocator sleeps we could get an allocation for a
>   	 * different hardware context.  No need to complicate the low level
>   	 * allocator for this for the rare use case of a command tied to
>   	 * a specific queue.
>   	 */
> -	if (WARN_ON_ONCE(!(flags & BLK_MQ_REQ_NOWAIT)))
> +	if (WARN_ON_ONCE(!(flags & (BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED))))
>   		return ERR_PTR(-EINVAL);
>   
>   	if (hctx_idx >= q->nr_hw_queues)
> @@ -462,11 +458,17 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
>   	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
>   	data.ctx = __blk_mq_get_ctx(q, cpu);
>   
> +	if (q->elevator)
> +		data.flags |= BLK_MQ_REQ_INTERNAL;
> +	else
> +		blk_mq_tag_busy(data.hctx);
> +
>   	ret = -EWOULDBLOCK;
> -	rq = __blk_mq_alloc_request(&data);
> -	if (!rq)
> +	tag = blk_mq_get_tag(&data);
> +	if (tag == BLK_MQ_TAG_FAIL)
>   		goto out_queue_exit;
> -	return rq;
> +	return blk_mq_rq_ctx_init(&data, tag, alloc_time_ns);
> +
>   out_queue_exit:
>   	blk_queue_exit(q);
>   	return ERR_PTR(ret);
> 
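For readers wondering how this path is exercised: per the commit message it
is only the fabrics connect commands, and the WARN_ON above now only insists
that at least one of BLK_MQ_REQ_NOWAIT or BLK_MQ_REQ_RESERVED is set. A
minimal (hypothetical) caller would look roughly like this; queue, opcode
and hctx index are placeholders, not taken from the patch:

	struct request *rq;

	/* allocate a tag on a specific hardware queue without sleeping */
	rq = blk_mq_alloc_request_hctx(q, REQ_OP_DRV_OUT,
				       BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED,
				       hctx_idx);
	if (IS_ERR(rq))
		return PTR_ERR(rq);
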
Other than that:

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
