linux-block.vger.kernel.org archive mirror
From: Ming Lei <ming.lei@redhat.com>
To: Jeffle Xu <jefflexu@linux.alibaba.com>
Cc: axboe@kernel.dk, hch@infradead.org, viro@zeniv.linux.org.uk,
	linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
	joseph.qi@linux.alibaba.com, xiaoguang.wang@linux.alibaba.com
Subject: Re: [PATCH v3 1/2] block: disable iopoll for split bio
Date: Fri, 16 Oct 2020 20:51:51 +0800
Message-ID: <20201016125151.GC1218835@T590>
In-Reply-To: <20201016091851.93728-2-jefflexu@linux.alibaba.com>

On Fri, Oct 16, 2020 at 05:18:50PM +0800, Jeffle Xu wrote:
> iopoll is initially meant for small-size, latency-sensitive IO. It
> doesn't work well for big IO, especially when the bio needs to be split
> into multiple bios. In that case, the cookie returned by
> __submit_bio_noacct_mq() is actually the cookie of the last split bio,
> and completing *this* last split bio via iopoll doesn't mean the whole
> original bio has completed. Callers of iopoll still need to wait for
> the completion of the other split bios.
> 
> Besides, bio splitting may cause more trouble for iopoll, which isn't
> supposed to be used for big IO in the first place.
> 
> iopoll for a split bio may cause a potential race if CPU migration
> happens during bio submission. Since the returned cookie is that of the
> last split bio, polling the corresponding hardware queue doesn't help
> complete the other split bios if they were enqueued into different
> hardware queues. And since interrupts are disabled for polling queues,
> the completion of those other split bios then depends on the timeout
> mechanism, potentially causing a hang.
> 
> iopoll for a split bio may also hang sync polling. Currently both the
> blkdev and iomap-based filesystems (ext4/xfs, etc.) support sync
> polling in their direct IO routines. These routines submit the bio
> without the REQ_NOWAIT flag set and then start sync polling in the
> current process context. The process may hang in blk_mq_get_tag() if
> the submitted bio has to be split into multiple bios that rapidly
> exhaust the queue depth: the process is waiting for the completion of
> previously allocated requests, but those completions can only be reaped
> by the polling it has not started yet, hence a deadlock.
> 
> To avoid the subtle problems described above, just disable iopoll for
> split bios.
> 
> Suggested-by: Ming Lei <ming.lei@redhat.com>
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> ---
>  block/blk-merge.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index bcf5e4580603..924db7c428b4 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -279,6 +279,20 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
>  	return NULL;
>  split:
>  	*segs = nsegs;
> +
> +	/*
> +	 * bio splitting may cause more trouble for iopoll which isn't supposed
> +	 * to be used in case of big IO.
> +	 * iopoll is initially for small size, latency sensitive IO. It doesn't
> +	 * work well for big IO, especially when it needs to be split to multiple
> +	 * bios. In this case, the returned cookie of __submit_bio_noacct_mq()
> +	 * is indeed the cookie of the last split bio. The completion of *this*
> +	 * last split bio done by iopoll doesn't mean the whole original bio has
> +	 * completed. Callers of iopoll still need to wait for completion of
> +	 * other split bios.
> +	 */
> +	bio->bi_opf &= ~REQ_HIPRI;
> +
>  	return bio_split(bio, sectors, GFP_NOIO, bs);
>  }
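
For reference, the sync-polling pattern the commit message above refers
to looks roughly like the following. This is a simplified, untested
sketch of a blkdev/iomap-style direct IO submit path, not the actual
upstream code; dio_submit_and_poll(), struct my_dio and
dio_is_complete() are made-up stand-ins for the caller's real structures
and completion check:

	/*
	 * Submit the bio, then sync-poll for its completion by driving
	 * blk_poll() with the single cookie returned by submit_bio().
	 */
	static void dio_submit_and_poll(struct kiocb *iocb,
					struct block_device *bdev,
					struct bio *bio, struct my_dio *dio)
	{
		/* If the bio is split, this is the cookie of the *last* split bio. */
		blk_qc_t qc = submit_bio(bio);

		while (!dio_is_complete(dio)) {
			/*
			 * Only the hw queue selected by 'qc' is polled, so
			 * completions of the other split bios, sitting on
			 * different interrupt-less poll queues, are never
			 * reaped by this loop.
			 */
			if (!(iocb->ki_flags & IOCB_HIPRI) ||
			    !blk_poll(bdev_get_queue(bdev), qc, true))
				io_schedule();
		}
	}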

The above change may not be enough, since the caller of submit_bio() can
still call into blk_poll() even though REQ_HIPRI has been cleared for the
split bio. To avoid this issue:

- either add a check in blk_poll() so that only hctxs of type
  HCTX_TYPE_POLL are polled,
- or return BLK_QC_T_NONE from blk_mq_submit_bio() if REQ_HIPRI was cleared.
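
Rough, untested sketches of the two options, just to make them concrete
(the exact hook points in blk_poll() and blk_mq_submit_bio() would need
care):

	/* Option 1: early in blk_poll(), refuse to poll non-poll hw queues. */
	hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
	if (hctx->type != HCTX_TYPE_POLL)
		return 0;

	/*
	 * Option 2: at the end of blk_mq_submit_bio(), hide the cookie when
	 * bio splitting has cleared REQ_HIPRI, so the caller never polls.
	 */
	if (!(bio->bi_opf & REQ_HIPRI))
		cookie = BLK_QC_T_NONE;
	return cookie;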



thanks,
Ming


Thread overview: 8+ messages
2020-10-16  9:18 [PATCH v3 0/2] block, iomap: disable iopoll for split bio Jeffle Xu
2020-10-16  9:18 ` [PATCH v3 1/2] block: " Jeffle Xu
2020-10-16 12:51   ` Ming Lei [this message]
2020-10-16  9:18 ` [PATCH v3 2/2] block,iomap: disable iopoll when split needed Jeffle Xu
2020-10-16 10:26   ` Ming Lei
2020-10-16 11:02     ` JeffleXu
2020-10-16 12:39       ` Ming Lei
2020-10-16 13:30         ` JeffleXu
