From: Damien Le Moal <damien.lemoal@opensource.wdc.com>
To: Bart Van Assche <bvanassche@acm.org>, Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org, Jaegeuk Kim <jaegeuk@kernel.org>,
	Avri Altman <avri.altman@wdc.com>,
	Damien Le Moal <damien.lemoal@wdc.com>
Subject: Re: [PATCH 4/8] block/mq-deadline: Only use zone locking if necessary
Date: Tue, 10 Jan 2023 08:46:12 +0900	[thread overview]
Message-ID: <92096c6d-fe0a-7b5b-222f-c532286c0c8b@opensource.wdc.com> (raw)
In-Reply-To: <20230109232738.169886-5-bvanassche@acm.org>

On 1/10/23 08:27, Bart Van Assche wrote:
> Measurements have shown that limiting the queue depth to one for zoned
> writes has a significant negative performance impact on zoned UFS devices.
> Hence this patch, which disables zone locking in the mq-deadline scheduler
> for storage controllers that support pipelining zoned writes. This patch is
> based on the following assumptions:
> - Applications submit write requests to sequential write required zones
>   in order.
> - It happens infrequently that zoned write requests are reordered by the
>   block layer.
> - The storage controller does not reorder write requests that have been
>   submitted to the same hardware queue. This is the case for UFS: the
>   UFSHCI specification requires that UFS controllers process requests in
>   order per hardware queue.
> - The I/O priority of all pipelined write requests is the same per zone.
> - Either no I/O scheduler is used or an I/O scheduler is used that
>   submits write requests per zone in LBA order.
> 
> If applications submit write requests to sequential write required zones
> in order, at least one of the pending requests will succeed. Hence, the
> number of retries that is required is at most (number of pending
> requests) - 1.
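
For context: blk_queue_pipeline_zoned_writes() is the predicate introduced by
patch 3/8 of this series, whose definition is not quoted in this message. The
following is only a sketch of how the opt-in presumably looks, based on the
usual queue-flag pattern; the flag name and the probe-time call site are
assumptions, not the actual patch:

#include <linux/blkdev.h>
#include <scsi/scsi_device.h>

/*
 * Sketch only: a blk_queue_* predicate normally follows the standard
 * QUEUE_FLAG pattern; the flag name below is hypothetical, see patch 3/8
 * for the real definition.
 */
#define blk_queue_pipeline_zoned_writes(q) \
	test_bit(QUEUE_FLAG_PIPELINE_ZONED_WRITES, &(q)->queue_flags)

/*
 * A controller driver that preserves per-hardware-queue write order would
 * then advertise it once while setting up the device (hypothetical hook):
 */
static void example_setup_queue(struct scsi_device *sdev)
{
	blk_queue_flag_set(QUEUE_FLAG_PIPELINE_ZONED_WRITES,
			   sdev->request_queue);
}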

But if the mid-layer decides to requeue a write request, the workqueue
used in the mq block layer for requeuing is going to completely destroy
write ordering, since that work runs outside of the submission path, in
parallel with it... Does blk_queue_pipeline_zoned_writes() == true also
guarantee that a write request will *never* be requeued before reaching the
adapter/device?
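
To make the requeue concern concrete: a requeued request is parked on the
queue's requeue list and re-inserted later by the requeue work item, i.e.
asynchronously with respect to new submissions. A minimal sketch of that
pattern, assuming a hypothetical driver completion handler (the blk-mq
helpers are the real ones):

#include <linux/blk-mq.h>

/*
 * Hypothetical completion path showing the asynchronous requeue that can
 * let later writes overtake an earlier one.
 */
static void example_complete(struct request *rq, blk_status_t sts)
{
	if (sts == BLK_STS_RESOURCE) {
		/* Park rq on the queue's requeue list ... */
		blk_mq_requeue_request(rq, false);
		/* ... and let the requeue work re-insert it later, in
		 * parallel with the normal submission path. */
		blk_mq_kick_requeue_list(rq->q);
		return;
	}
	blk_mq_end_request(rq, sts);
}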

> 
> Cc: Damien Le Moal <damien.lemoal@wdc.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  block/blk-zoned.c   |  3 ++-
>  block/mq-deadline.c | 14 +++++++++-----
>  2 files changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> index db829401d8d0..158638091e39 100644
> --- a/block/blk-zoned.c
> +++ b/block/blk-zoned.c
> @@ -520,7 +520,8 @@ static int blk_revalidate_zone_cb(struct blk_zone *zone, unsigned int idx,
>  		break;
>  	case BLK_ZONE_TYPE_SEQWRITE_REQ:
>  	case BLK_ZONE_TYPE_SEQWRITE_PREF:
> -		if (!args->seq_zones_wlock) {
> +		if (!blk_queue_pipeline_zoned_writes(q) &&
> +		    !args->seq_zones_wlock) {
>  			args->seq_zones_wlock =
>  				blk_alloc_zone_bitmap(q->node, args->nr_zones);
>  			if (!args->seq_zones_wlock)
> diff --git a/block/mq-deadline.c b/block/mq-deadline.c
> index f10c2a0d18d4..e41808c0b007 100644
> --- a/block/mq-deadline.c
> +++ b/block/mq-deadline.c
> @@ -339,7 +339,8 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
>  		return NULL;
>  
>  	rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next);
> -	if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
> +	if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q) ||
> +	    blk_queue_pipeline_zoned_writes(rq->q))
>  		return rq;
>  
>  	/*
> @@ -378,7 +379,8 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
>  	if (!rq)
>  		return NULL;
>  
> -	if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
> +	if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q) ||
> +	    blk_queue_pipeline_zoned_writes(rq->q))
>  		return rq;
>  
>  	/*
> @@ -503,8 +505,9 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
>  	}
>  
>  	/*
> -	 * For a zoned block device, if we only have writes queued and none of
> -	 * them can be dispatched, rq will be NULL.
> +	 * For a zoned block device that requires write serialization, if we
> +	 * only have writes queued and none of them can be dispatched, rq will
> +	 * be NULL.
>  	 */
>  	if (!rq)
>  		return NULL;
> @@ -893,7 +896,8 @@ static void dd_finish_request(struct request *rq)
>  
>  	atomic_inc(&per_prio->stats.completed);
>  
> -	if (blk_queue_is_zoned(q)) {
> +	if (blk_queue_is_zoned(rq->q) &&
> +	    !blk_queue_pipeline_zoned_writes(q)) {
>  		unsigned long flags;
>  
>  		spin_lock_irqsave(&dd->zone_lock, flags);
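
For reference on what the dd_finish_request() hunk above now skips: the
seq_zones_wlock bitmap allocated in the blk-zoned.c hunk backs the per-zone
write lock that mq-deadline takes at dispatch time and releases on request
completion (blk_req_zone_write_lock()/blk_req_zone_write_unlock()). A
simplified, illustrative sketch of that mechanism, not the exact kernel code:

#include <linux/bitops.h>

/*
 * One bit per sequential zone; a set bit means a write to that zone is
 * already in flight, so further writes to it must not be dispatched.
 */
static bool zone_write_trylock(unsigned long *seq_zones_wlock,
			       unsigned int zone_no)
{
	return !test_and_set_bit(zone_no, seq_zones_wlock);
}

static void zone_write_unlock(unsigned long *seq_zones_wlock,
			      unsigned int zone_no)
{
	clear_bit(zone_no, seq_zones_wlock);
}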

-- 
Damien Le Moal
Western Digital Research


Thread overview (37+ messages):
2023-01-09 23:27 [PATCH 0/8] Enable zoned write pipelining for UFS devices Bart Van Assche
2023-01-09 23:27 ` [PATCH 1/8] block: Document blk_queue_zone_is_seq() and blk_rq_zone_is_seq() Bart Van Assche
2023-01-09 23:36   ` Damien Le Moal
2023-01-09 23:27 ` [PATCH 2/8] block: Introduce the blk_rq_is_seq_zone_write() function Bart Van Assche
2023-01-09 23:38   ` Damien Le Moal
2023-01-09 23:52     ` Bart Van Assche
2023-01-10  9:52       ` Niklas Cassel
2023-01-10 11:54         ` Damien Le Moal
2023-01-10 12:13           ` Niklas Cassel
2023-01-10 12:41             ` Damien Le Moal
2023-01-09 23:27 ` [PATCH 3/8] block: Introduce a request queue flag for pipelining zoned writes Bart Van Assche
2023-01-09 23:27 ` [PATCH 4/8] block/mq-deadline: Only use zone locking if necessary Bart Van Assche
2023-01-09 23:46   ` Damien Le Moal [this message]
2023-01-09 23:51     ` Bart Van Assche
2023-01-09 23:56       ` Damien Le Moal
2023-01-10  0:19         ` Bart Van Assche
2023-01-10  0:32           ` Damien Le Moal
2023-01-10  0:38             ` Jens Axboe
2023-01-10  0:41               ` Jens Axboe
2023-01-10  0:44                 ` Bart Van Assche
2023-01-10  0:48                   ` Jens Axboe
2023-01-10  0:56                     ` Bart Van Assche
2023-01-10  1:03                       ` Jens Axboe
2023-01-10  1:17                         ` Bart Van Assche
2023-01-10  1:48                           ` Jens Axboe
2023-01-10  2:24                     ` Damien Le Moal
2023-01-10  3:00                       ` Jens Axboe
2023-01-09 23:27 ` [PATCH 5/8] block/null_blk: Refactor null_queue_rq() Bart Van Assche
2023-01-09 23:27 ` [PATCH 6/8] block/null_blk: Add support for pipelining zoned writes Bart Van Assche
2023-01-09 23:27 ` [PATCH 7/8] scsi: Retry unaligned " Bart Van Assche
2023-01-09 23:51   ` Damien Le Moal
2023-01-09 23:55     ` Bart Van Assche
2023-01-09 23:27 ` [PATCH 8/8] scsi: ufs: Enable zoned write pipelining Bart Van Assche
2023-01-10  9:16   ` Avri Altman
2023-01-10 17:42     ` Bart Van Assche
2023-01-10 12:23   ` Bean Huo
2023-01-10 17:41     ` Bart Van Assche
