* [PATCH v3 0/2] block, iomap: disable iopoll for split bio
@ 2020-10-16  9:18 Jeffle Xu
  2020-10-16  9:18 ` [PATCH v3 1/2] block: " Jeffle Xu
  2020-10-16  9:18 ` [PATCH v3 2/2] block,iomap: disable iopoll when split needed Jeffle Xu
  0 siblings, 2 replies; 8+ messages in thread
From: Jeffle Xu @ 2020-10-16  9:18 UTC (permalink / raw)
  To: axboe, hch, viro
  Cc: linux-fsdevel, linux-block, ming.lei, joseph.qi, xiaoguang.wang

This patchset fixes a potential hang that can occur in sync polling.

Please refer to the following link for background info and the v1 patch:
https://patchwork.kernel.org/project/linux-block/patch/20201013084051.27255-1-jefflexu@linux.alibaba.com/

The first patch disables iopoll for split bios in the block layer, as
suggested by Ming Lei.

The second patch disables iopoll when one dio needs to be split into
multiple bios.



changes since v2:
- tune the line length of patch 1
- fix the condition checking whether a split is needed in patch 2

changes since v1:
- adopt the fix suggested by Ming Lei, disabling iopoll for split bios directly
- disable iopoll in the direct IO routines of the blkdev fs and iomap

Jeffle Xu (2):
  block: disable iopoll for split bio
  block,iomap: disable iopoll when split needed

 block/blk-merge.c    | 14 ++++++++++++++
 fs/block_dev.c       |  7 +++++++
 fs/iomap/direct-io.c |  8 ++++++++
 3 files changed, 29 insertions(+)

-- 
2.27.0



* [PATCH v3 1/2] block: disable iopoll for split bio
  2020-10-16  9:18 [PATCH v3 0/2] block, iomap: disable iopoll for split bio Jeffle Xu
@ 2020-10-16  9:18 ` Jeffle Xu
  2020-10-16 12:51   ` Ming Lei
  2020-10-16  9:18 ` [PATCH v3 2/2] block,iomap: disable iopoll when split needed Jeffle Xu
  1 sibling, 1 reply; 8+ messages in thread
From: Jeffle Xu @ 2020-10-16  9:18 UTC (permalink / raw)
  To: axboe, hch, viro
  Cc: linux-fsdevel, linux-block, ming.lei, joseph.qi, xiaoguang.wang

iopoll is originally for small-size, latency-sensitive IO. It doesn't
work well for big IO, especially when the IO needs to be split into
multiple bios. In this case, the returned cookie of
__submit_bio_noacct_mq() is actually the cookie of the last split bio.
The completion of *this* last split bio done by iopoll doesn't mean the
whole original bio has completed. Callers of iopoll still need to wait
for the completion of the other split bios.
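
To illustrate, a heavily simplified sketch of the effect (this is not
the actual __submit_bio_noacct_mq() code; pop_next_split() is a
hypothetical helper standing in for the real split-and-resubmit loop):

```c
static blk_qc_t submit_all_splits(struct request_queue *q, struct bio *bio)
{
	blk_qc_t cookie = BLK_QC_T_NONE;
	struct bio *split;

	/* each submission overwrites 'cookie'; earlier ones are lost */
	while ((split = pop_next_split(q, bio)) != NULL)
		cookie = blk_mq_submit_bio(split);

	return cookie;	/* covers only the *last* split bio */
}
```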

Besides, bio splitting may cause more trouble for iopoll, which isn't
supposed to be used for big IO in the first place.

iopoll for split bios may cause a potential race if CPU migration
happens during bio submission. Since the returned cookie is that of the
last split bio, polling on the corresponding hardware queue doesn't
help complete the other split bios if they were enqueued into different
hardware queues. Since interrupts are disabled for polling queues, the
completion of these other split bios then depends on the timeout
mechanism, thus causing a potential hang.

iopoll for split bios may also cause a hang in sync polling. Currently
both the blkdev fs and iomap-based fs (ext4/xfs, etc.) support sync
polling in their direct IO routines. These routines submit bios without
the REQ_NOWAIT flag set, and then start sync polling in the current
process context. The process may hang in blk_mq_get_tag() if the
submitted bio has to be split into multiple bios and can thus rapidly
exhaust the queue depth. The process is then waiting for the completion
of the previously allocated requests, which should be reaped by the
following polling, thus causing a deadlock.
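
A condensed sketch of the pattern that hangs (heavily simplified;
dio_done() is a hypothetical stand-in for the dio->waiter check in
__blkdev_direct_IO()):

```c
/* sync dio: submit everything first, poll afterwards */
qc = submit_bio(bio);		/* can sleep in blk_mq_get_tag() once the
				 * earlier split bios exhaust the tags */
while (!dio_done(dio))
	blk_poll(q, qc, true);	/* the only reaper, but it never runs
				 * because submit_bio() hasn't returned */
```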

To avoid the subtle trouble described above, just disable iopoll for
split bios.

Suggested-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 block/blk-merge.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index bcf5e4580603..924db7c428b4 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -279,6 +279,20 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	return NULL;
 split:
 	*segs = nsegs;
+
+	/*
+	 * Bio splitting may cause subtle trouble for iopoll, which isn't
+	 * supposed to be used for big IO anyway.
+	 * iopoll is originally for small-size, latency-sensitive IO. It
+	 * doesn't work well for big IO, especially when the IO needs to be
+	 * split into multiple bios. In this case, the returned cookie of
+	 * __submit_bio_noacct_mq() is actually the cookie of the last split
+	 * bio. The completion of *this* last split bio done by iopoll doesn't
+	 * mean the whole original bio has completed. Callers of iopoll still
+	 * need to wait for the completion of the other split bios.
+	 */
+	bio->bi_opf &= ~REQ_HIPRI;
+
 	return bio_split(bio, sectors, GFP_NOIO, bs);
 }
 
-- 
2.27.0



* [PATCH v3 2/2] block,iomap: disable iopoll when split needed
  2020-10-16  9:18 [PATCH v3 0/2] block, iomap: disable iopoll for split bio Jeffle Xu
  2020-10-16  9:18 ` [PATCH v3 1/2] block: " Jeffle Xu
@ 2020-10-16  9:18 ` Jeffle Xu
  2020-10-16 10:26   ` Ming Lei
  1 sibling, 1 reply; 8+ messages in thread
From: Jeffle Xu @ 2020-10-16  9:18 UTC (permalink / raw)
  To: axboe, hch, viro
  Cc: linux-fsdevel, linux-block, ming.lei, joseph.qi, xiaoguang.wang

Both the blkdev fs and iomap-based fs (ext4, xfs, etc.) currently
support sync iopoll. A single bio can contain at most BIO_MAX_PAGES,
i.e. 256 bio_vecs. If the input iov_iter contains more than 256
segments, then the dio will be split into multiple bios, which may
cause a potential deadlock for sync iopoll.

When it comes to sync iopoll, the bio is submitted without the
REQ_NOWAIT flag set, and the process may hang in blk_mq_get_tag() if
the dio needs to be split into multiple bios and can thus rapidly
exhaust the queue depth. The process has to wait for the completion of
the previously allocated requests, which should be reaped by the
following sync polling, thus causing a deadlock.

In fact there's a subtle difference in the handling of HIPRI IO between
the blkdev fs and iomap-based fs when a dio needs to be split into
multiple bios. The blkdev fs will set REQ_HIPRI for only the last split
bio, leaving the previous bios queued into normal hardware queues and
not causing the trouble described above. The iomap-based fs will set
REQ_HIPRI for all split bios, and thus may cause the potential deadlock
described above.
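
A heavily simplified contrast of the two paths (see __blkdev_direct_IO()
and iomap_dio_submit_bio() for the real code; is_last_bio here is
shorthand for the !nr_pages check in the blkdev submission loop):

```c
/* blkdev fs: only the final bio of the dio is marked for polling */
if (is_last_bio && (iocb->ki_flags & IOCB_HIPRI))
	bio_set_polled(bio, iocb);		/* sets REQ_HIPRI */
qc = submit_bio(bio);

/* iomap-based fs: iomap_dio_submit_bio() marks *every* bio */
if (dio->iocb->ki_flags & IOCB_HIPRI)
	bio_set_polled(bio, dio->iocb);		/* REQ_HIPRI on each split */
```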

Thus disable iopoll when one dio needs to be split into multiple bios.
Though the blkdev fs may not suffer from this issue, it still may not
make much sense to iopoll for big IO, since iopoll is originally for
small-size, latency-sensitive IO.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/block_dev.c       | 7 +++++++
 fs/iomap/direct-io.c | 8 ++++++++
 2 files changed, 15 insertions(+)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9e84b1928b94..1b56b39e35b5 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -436,6 +436,13 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 			break;
 		}
 
+		/*
+		 * The current dio needs to be split into multiple bios here.
+		 * iopoll is originally for small-size, latency-sensitive IO,
+		 * so disable iopoll if a split is needed.
+		 */
+		iocb->ki_flags &= ~IOCB_HIPRI;
+
 		if (!dio->multi_bio) {
 			/*
 			 * AIO needs an extra reference to ensure the dio
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index c1aafb2ab990..46668cceefd2 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -308,6 +308,14 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
 		copied += n;
 
 		nr_pages = iov_iter_npages(dio->submit.iter, BIO_MAX_PAGES);
+		/*
+		 * The current dio needs to be split into multiple bios here.
+		 * iopoll is originally for small-size, latency-sensitive IO,
+		 * so disable iopoll if a split is needed.
+		 */
+		if (nr_pages)
+			dio->iocb->ki_flags &= ~IOCB_HIPRI;
+
 		iomap_dio_submit_bio(dio, iomap, bio, pos);
 		pos += n;
 	} while (nr_pages);
-- 
2.27.0



* Re: [PATCH v3 2/2] block,iomap: disable iopoll when split needed
  2020-10-16  9:18 ` [PATCH v3 2/2] block,iomap: disable iopoll when split needed Jeffle Xu
@ 2020-10-16 10:26   ` Ming Lei
  2020-10-16 11:02     ` JeffleXu
  0 siblings, 1 reply; 8+ messages in thread
From: Ming Lei @ 2020-10-16 10:26 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: axboe, hch, viro, linux-fsdevel, linux-block, joseph.qi, xiaoguang.wang

On Fri, Oct 16, 2020 at 05:18:51PM +0800, Jeffle Xu wrote:
> Both the blkdev fs and iomap-based fs (ext4, xfs, etc.) currently
> support sync iopoll. A single bio can contain at most BIO_MAX_PAGES,
> i.e. 256 bio_vecs. If the input iov_iter contains more than 256
> segments, then the dio will be split into multiple bios, which may
> cause a potential deadlock for sync iopoll.
>
> When it comes to sync iopoll, the bio is submitted without the
> REQ_NOWAIT flag set, and the process may hang in blk_mq_get_tag() if
> the dio needs to be split into multiple bios and can thus rapidly
> exhaust the queue depth. The process has to wait for the completion of
> the previously allocated requests, which should be reaped by the
> following sync polling, thus causing a deadlock.
>
> In fact there's a subtle difference in the handling of HIPRI IO between
> the blkdev fs and iomap-based fs when a dio needs to be split into
> multiple bios. The blkdev fs will set REQ_HIPRI for only the last split
> bio, leaving the previous bios queued into normal hardware queues and
> not causing the trouble described above. The iomap-based fs will set
> REQ_HIPRI for all split bios, and thus may cause the potential deadlock
> described above.
>
> Thus disable iopoll when one dio needs to be split into multiple bios.
> Though the blkdev fs may not suffer from this issue, it still may not
> make much sense to iopoll for big IO, since iopoll is originally for
> small-size, latency-sensitive IO.
> 
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> ---
>  fs/block_dev.c       | 7 +++++++
>  fs/iomap/direct-io.c | 8 ++++++++
>  2 files changed, 15 insertions(+)
> 
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index 9e84b1928b94..1b56b39e35b5 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -436,6 +436,13 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>  			break;
>  		}
>  
> +		/*
> +		 * The current dio needs to be split into multiple bios here.
> +		 * iopoll is originally for small-size, latency-sensitive IO,
> +		 * so disable iopoll if a split is needed.
> +		 */
> +		iocb->ki_flags &= ~IOCB_HIPRI;
> +

Not sure if it is good to clear IOCB_HIPRI of the iocb, since it is
usually maintained by upper-layer code (io_uring, aio, ...) and we
shouldn't touch this flag here.

>  		if (!dio->multi_bio) {
>  			/*
>  			 * AIO needs an extra reference to ensure the dio
> diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> index c1aafb2ab990..46668cceefd2 100644
> --- a/fs/iomap/direct-io.c
> +++ b/fs/iomap/direct-io.c
> @@ -308,6 +308,14 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
>  		copied += n;
>  
>  		nr_pages = iov_iter_npages(dio->submit.iter, BIO_MAX_PAGES);
> +		/*
> +		 * The current dio needs to be split into multiple bios here.
> +		 * iopoll is originally for small-size, latency-sensitive IO,
> +		 * so disable iopoll if a split is needed.
> +		 */
> +		if (nr_pages)
> +			dio->iocb->ki_flags &= ~IOCB_HIPRI;

Same concern as above.

Thanks,
Ming



* Re: [PATCH v3 2/2] block,iomap: disable iopoll when split needed
  2020-10-16 10:26   ` Ming Lei
@ 2020-10-16 11:02     ` JeffleXu
  2020-10-16 12:39       ` Ming Lei
  0 siblings, 1 reply; 8+ messages in thread
From: JeffleXu @ 2020-10-16 11:02 UTC (permalink / raw)
  To: Ming Lei
  Cc: axboe, hch, viro, linux-fsdevel, linux-block, joseph.qi, xiaoguang.wang


On 10/16/20 6:26 PM, Ming Lei wrote:
> On Fri, Oct 16, 2020 at 05:18:51PM +0800, Jeffle Xu wrote:
>> Both the blkdev fs and iomap-based fs (ext4, xfs, etc.) currently
>> support sync iopoll. A single bio can contain at most BIO_MAX_PAGES,
>> i.e. 256 bio_vecs. If the input iov_iter contains more than 256
>> segments, then the dio will be split into multiple bios, which may
>> cause a potential deadlock for sync iopoll.
>>
>> When it comes to sync iopoll, the bio is submitted without the
>> REQ_NOWAIT flag set, and the process may hang in blk_mq_get_tag() if
>> the dio needs to be split into multiple bios and can thus rapidly
>> exhaust the queue depth. The process has to wait for the completion of
>> the previously allocated requests, which should be reaped by the
>> following sync polling, thus causing a deadlock.
>>
>> In fact there's a subtle difference in the handling of HIPRI IO between
>> the blkdev fs and iomap-based fs when a dio needs to be split into
>> multiple bios. The blkdev fs will set REQ_HIPRI for only the last split
>> bio, leaving the previous bios queued into normal hardware queues and
>> not causing the trouble described above. The iomap-based fs will set
>> REQ_HIPRI for all split bios, and thus may cause the potential deadlock
>> described above.
>>
>> Thus disable iopoll when one dio needs to be split into multiple bios.
>> Though the blkdev fs may not suffer from this issue, it still may not
>> make much sense to iopoll for big IO, since iopoll is originally for
>> small-size, latency-sensitive IO.
>>
>> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
>> ---
>>   fs/block_dev.c       | 7 +++++++
>>   fs/iomap/direct-io.c | 8 ++++++++
>>   2 files changed, 15 insertions(+)
>>
>> diff --git a/fs/block_dev.c b/fs/block_dev.c
>> index 9e84b1928b94..1b56b39e35b5 100644
>> --- a/fs/block_dev.c
>> +++ b/fs/block_dev.c
>> @@ -436,6 +436,13 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>>   			break;
>>   		}
>>   
>> +		/*
>> +		 * The current dio needs to be split into multiple bios here.
>> +		 * iopoll is originally for small-size, latency-sensitive IO,
>> +		 * so disable iopoll if a split is needed.
>> +		 */
>> +		iocb->ki_flags &= ~IOCB_HIPRI;
>> +
> Not sure if it is good to clear IOCB_HIPRI of the iocb, since it is
> usually maintained by upper-layer code (io_uring, aio, ...) and we
> shouldn't touch this flag here.

If we queue bios into the DEFAULT hardware queue but leave the
corresponding kiocb->ki_flags's IOCB_HIPRI set (exactly what the first
patch does), is that another inconsistency?

Please consider the following code snippet from __blkdev_direct_IO()

```
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (!READ_ONCE(dio->waiter))
			break;

		if (!(iocb->ki_flags & IOCB_HIPRI) ||
		    !blk_poll(bdev_get_queue(bdev), qc, true))
			blk_io_schedule();
	}
```

The IOCB_HIPRI flag is still set in iocb->ki_flags, but the corresponding
bios are queued into the DEFAULT hardware queue after the first patch.
blk_poll() is still called in this case.


>
>>   		if (!dio->multi_bio) {
>>   			/*
>>   			 * AIO needs an extra reference to ensure the dio
>> diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
>> index c1aafb2ab990..46668cceefd2 100644
>> --- a/fs/iomap/direct-io.c
>> +++ b/fs/iomap/direct-io.c
>> @@ -308,6 +308,14 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
>>   		copied += n;
>>   
>>   		nr_pages = iov_iter_npages(dio->submit.iter, BIO_MAX_PAGES);
>> +		/*
>> +		 * The current dio needs to be split into multiple bios here.
>> +		 * iopoll is originally for small-size, latency-sensitive IO,
>> +		 * so disable iopoll if a split is needed.
>> +		 */
>> +		if (nr_pages)
>> +			dio->iocb->ki_flags &= ~IOCB_HIPRI;
> Same concern as above.
>
> Thanks,
> Ming


* Re: [PATCH v3 2/2] block,iomap: disable iopoll when split needed
  2020-10-16 11:02     ` JeffleXu
@ 2020-10-16 12:39       ` Ming Lei
  2020-10-16 13:30         ` JeffleXu
  0 siblings, 1 reply; 8+ messages in thread
From: Ming Lei @ 2020-10-16 12:39 UTC (permalink / raw)
  To: JeffleXu
  Cc: axboe, hch, viro, linux-fsdevel, linux-block, joseph.qi, xiaoguang.wang

On Fri, Oct 16, 2020 at 07:02:44PM +0800, JeffleXu wrote:
> 
> On 10/16/20 6:26 PM, Ming Lei wrote:
> > On Fri, Oct 16, 2020 at 05:18:51PM +0800, Jeffle Xu wrote:
> > > Both blkdev fs and iomap-based fs (ext4, xfs, etc.) currently support
> > > sync iopoll. One single bio can contain at most BIO_MAX_PAGES, i.e. 256
> > > bio_vec. If the input iov_iter contains more than 256 segments, then
> > > one dio will be split into multiple bios, which may cause potential
> > > deadlock for sync iopoll.
> > > 
> > > When it comes to sync iopoll, the bio is submitted without REQ_NOWAIT
> > > flag set and the process may hang in blk_mq_get_tag() if the dio needs
> > > to be split into multiple bios and thus can rapidly exhausts the queue
> > > depth. The process has to wait for the completion of the previously
> > > allocated requests, which should be reaped by the following sync
> > > polling, and thus causing a deadlock.
> > > 
> > > In fact there's a subtle difference of handling of HIPRI IO between
> > > blkdev fs and iomap-based fs, when dio need to be split into multiple
> > > bios. blkdev fs will set REQ_HIPRI for only the last split bio, leaving
> > > the previous bios queued into normal hardware queues, and not causing
> > > the trouble described above. iomap-based fs will set REQ_HIPRI for all
> > > split bios, and thus may cause the potential deadlock decribed above.
> > > 
> > > Thus disable iopoll when one dio need to be split into multiple bios.
> > > Though blkdev fs may not suffer this issue, still it may not make much
> > > sense to iopoll for big IO, since iopoll is initially for small size,
> > > latency sensitive IO.
> > > 
> > > Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> > > ---
> > >   fs/block_dev.c       | 7 +++++++
> > >   fs/iomap/direct-io.c | 8 ++++++++
> > >   2 files changed, 15 insertions(+)
> > > 
> > > diff --git a/fs/block_dev.c b/fs/block_dev.c
> > > index 9e84b1928b94..1b56b39e35b5 100644
> > > --- a/fs/block_dev.c
> > > +++ b/fs/block_dev.c
> > > @@ -436,6 +436,13 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
> > >   			break;
> > >   		}
> > > +		/*
> > > +		 * The current dio needs to be split into multiple bios here.
> > > +		 * iopoll is originally for small-size, latency-sensitive IO,
> > > +		 * so disable iopoll if a split is needed.
> > > +		 */
> > > +		iocb->ki_flags &= ~IOCB_HIPRI;
> > > +
> > Not sure if it is good to clear IOCB_HIPRI of the iocb, since it is
> > usually maintained by upper-layer code (io_uring, aio, ...) and we
> > shouldn't touch this flag here.
> 
> If we queue bios into the DEFAULT hardware queue but leave the
> corresponding kiocb->ki_flags's IOCB_HIPRI set (exactly what the first
> patch does), is that another inconsistency?

My question is whether it is good for this code to clear IOCB_HIPRI of
the iocb, given this is the first such usage. And does the io_uring
implementation expect the flag to be cleared by the lower layer?

> 
> Please consider the following code snippet from __blkdev_direct_IO()
> 
> ```
> 	for (;;) {
> 		set_current_state(TASK_UNINTERRUPTIBLE);
> 		if (!READ_ONCE(dio->waiter))
> 			break;
> 
> 		if (!(iocb->ki_flags & IOCB_HIPRI) ||
> 		    !blk_poll(bdev_get_queue(bdev), qc, true))
> 			blk_io_schedule();
> 	}
> ```
> 
> The IOCB_HIPRI flag is still set in iocb->ki_flags, but the corresponding
> bios are queued into the DEFAULT hardware queue after the first patch.
> blk_poll() is still called in this case.

It may be handled in the following way:

 		if (!((iocb->ki_flags & IOCB_HIPRI) && !dio->multi_bio) ||
 		    !blk_poll(bdev_get_queue(bdev), qc, true))
 			blk_io_schedule();

BTW, even for a single bio with IOCB_HIPRI, the single fs bio can still
be split, and blk_poll() will be called too.


Thanks, 
Ming



* Re: [PATCH v3 1/2] block: disable iopoll for split bio
  2020-10-16  9:18 ` [PATCH v3 1/2] block: " Jeffle Xu
@ 2020-10-16 12:51   ` Ming Lei
  0 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2020-10-16 12:51 UTC (permalink / raw)
  To: Jeffle Xu
  Cc: axboe, hch, viro, linux-fsdevel, linux-block, joseph.qi, xiaoguang.wang

On Fri, Oct 16, 2020 at 05:18:50PM +0800, Jeffle Xu wrote:
> iopoll is originally for small-size, latency-sensitive IO. It doesn't
> work well for big IO, especially when the IO needs to be split into
> multiple bios. In this case, the returned cookie of
> __submit_bio_noacct_mq() is actually the cookie of the last split bio.
> The completion of *this* last split bio done by iopoll doesn't mean the
> whole original bio has completed. Callers of iopoll still need to wait
> for the completion of the other split bios.
>
> Besides, bio splitting may cause more trouble for iopoll, which isn't
> supposed to be used for big IO in the first place.
>
> iopoll for split bios may cause a potential race if CPU migration
> happens during bio submission. Since the returned cookie is that of the
> last split bio, polling on the corresponding hardware queue doesn't
> help complete the other split bios if they were enqueued into different
> hardware queues. Since interrupts are disabled for polling queues, the
> completion of these other split bios then depends on the timeout
> mechanism, thus causing a potential hang.
>
> iopoll for split bios may also cause a hang in sync polling. Currently
> both the blkdev fs and iomap-based fs (ext4/xfs, etc.) support sync
> polling in their direct IO routines. These routines submit bios without
> the REQ_NOWAIT flag set, and then start sync polling in the current
> process context. The process may hang in blk_mq_get_tag() if the
> submitted bio has to be split into multiple bios and can thus rapidly
> exhaust the queue depth. The process is then waiting for the completion
> of the previously allocated requests, which should be reaped by the
> following polling, thus causing a deadlock.
>
> To avoid the subtle trouble described above, just disable iopoll for
> split bios.
> 
> Suggested-by: Ming Lei <ming.lei@redhat.com>
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> ---
>  block/blk-merge.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index bcf5e4580603..924db7c428b4 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -279,6 +279,20 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
>  	return NULL;
>  split:
>  	*segs = nsegs;
> +
> +	/*
> +	 * Bio splitting may cause subtle trouble for iopoll, which isn't
> +	 * supposed to be used for big IO anyway.
> +	 * iopoll is originally for small-size, latency-sensitive IO. It
> +	 * doesn't work well for big IO, especially when the IO needs to be
> +	 * split into multiple bios. In this case, the returned cookie of
> +	 * __submit_bio_noacct_mq() is actually the cookie of the last split
> +	 * bio. The completion of *this* last split bio done by iopoll doesn't
> +	 * mean the whole original bio has completed. Callers of iopoll still
> +	 * need to wait for the completion of the other split bios.
> +	 */
> +	bio->bi_opf &= ~REQ_HIPRI;
> +
>  	return bio_split(bio, sectors, GFP_NOIO, bs);
>  }

The above change may not be enough, since the caller of submit_bio()
can still call into blk_poll() even though REQ_HIPRI is cleared for the
split bio. To avoid this issue:

- Either we may add a check in blk_poll() to only allow polling on an
  hctx of type HCTX_TYPE_POLL (a rough sketch follows below),
- or return BLK_QC_T_NONE from blk_mq_submit_bio() if REQ_HIPRI is cleared.
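
For instance, a rough (untested) sketch of the first option:

```c
/* in blk_poll(), before spinning on the completion queue: */
struct blk_mq_hw_ctx *hctx;

hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
if (hctx->type != HCTX_TYPE_POLL)
	return 0;	/* bio was demoted to an IRQ-driven queue */
```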



thanks,
Ming



* Re: [PATCH v3 2/2] block,iomap: disable iopoll when split needed
  2020-10-16 12:39       ` Ming Lei
@ 2020-10-16 13:30         ` JeffleXu
  0 siblings, 0 replies; 8+ messages in thread
From: JeffleXu @ 2020-10-16 13:30 UTC (permalink / raw)
  To: Ming Lei
  Cc: axboe, hch, viro, linux-fsdevel, linux-block, joseph.qi, xiaoguang.wang


On 10/16/20 8:39 PM, Ming Lei wrote:
> On Fri, Oct 16, 2020 at 07:02:44PM +0800, JeffleXu wrote:
>> On 10/16/20 6:26 PM, Ming Lei wrote:
>>> On Fri, Oct 16, 2020 at 05:18:51PM +0800, Jeffle Xu wrote:
>>>> Both the blkdev fs and iomap-based fs (ext4, xfs, etc.) currently
>>>> support sync iopoll. A single bio can contain at most BIO_MAX_PAGES,
>>>> i.e. 256 bio_vecs. If the input iov_iter contains more than 256
>>>> segments, then the dio will be split into multiple bios, which may
>>>> cause a potential deadlock for sync iopoll.
>>>>
>>>> When it comes to sync iopoll, the bio is submitted without the
>>>> REQ_NOWAIT flag set, and the process may hang in blk_mq_get_tag() if
>>>> the dio needs to be split into multiple bios and can thus rapidly
>>>> exhaust the queue depth. The process has to wait for the completion of
>>>> the previously allocated requests, which should be reaped by the
>>>> following sync polling, thus causing a deadlock.
>>>>
>>>> In fact there's a subtle difference in the handling of HIPRI IO between
>>>> the blkdev fs and iomap-based fs when a dio needs to be split into
>>>> multiple bios. The blkdev fs will set REQ_HIPRI for only the last split
>>>> bio, leaving the previous bios queued into normal hardware queues and
>>>> not causing the trouble described above. The iomap-based fs will set
>>>> REQ_HIPRI for all split bios, and thus may cause the potential deadlock
>>>> described above.
>>>>
>>>> Thus disable iopoll when one dio needs to be split into multiple bios.
>>>> Though the blkdev fs may not suffer from this issue, it still may not
>>>> make much sense to iopoll for big IO, since iopoll is originally for
>>>> small-size, latency-sensitive IO.
>>>>
>>>> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
>>>> ---
>>>>    fs/block_dev.c       | 7 +++++++
>>>>    fs/iomap/direct-io.c | 8 ++++++++
>>>>    2 files changed, 15 insertions(+)
>>>>
>>>> diff --git a/fs/block_dev.c b/fs/block_dev.c
>>>> index 9e84b1928b94..1b56b39e35b5 100644
>>>> --- a/fs/block_dev.c
>>>> +++ b/fs/block_dev.c
>>>> @@ -436,6 +436,13 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>>>>    			break;
>>>>    		}
>>>> +		/*
>>>> +		 * The current dio needs to be split into multiple bios here.
>>>> +		 * iopoll is originally for small-size, latency-sensitive IO,
>>>> +		 * so disable iopoll if a split is needed.
>>>> +		 */
>>>> +		iocb->ki_flags &= ~IOCB_HIPRI;
>>>> +
>>> Not sure if it is good to clear IOCB_HIPRI of the iocb, since it is
>>> usually maintained by upper-layer code (io_uring, aio, ...) and we
>>> shouldn't touch this flag here.
>> If we queue bios into the DEFAULT hardware queue but leave the
>> corresponding kiocb->ki_flags's IOCB_HIPRI set (exactly what the first
>> patch does), is that another inconsistency?
> My question is whether it is good for this code to clear IOCB_HIPRI of
> the iocb, given this is the first such usage. And does the io_uring
> implementation expect the flag to be cleared by the lower layer?

I see your point. I will check the io_uring code later.


>
>> Please consider the following code snippet from __blkdev_direct_IO()
>>
>> ```
>> 	for (;;) {
>> 		set_current_state(TASK_UNINTERRUPTIBLE);
>> 		if (!READ_ONCE(dio->waiter))
>> 			break;
>>
>> 		if (!(iocb->ki_flags & IOCB_HIPRI) ||
>> 		    !blk_poll(bdev_get_queue(bdev), qc, true))
>> 			blk_io_schedule();
>> 	}
>> ```
>>
>> The IOCB_HIPRI flag is still set in iocb->ki_flags, but the corresponding
>> bios are queued into the DEFAULT hardware queue after the first patch.
>> blk_poll() is still called in this case.
> It may be handled in the following way:
>
>   		if (!((iocb->ki_flags & IOCB_HIPRI) && !dio->multi_bio) ||
>   		    !blk_poll(bdev_get_queue(bdev), qc, true))
>   			blk_io_schedule();
>
> BTW, even for a single bio with IOCB_HIPRI, the single fs bio can still
> be split, and blk_poll() will be called too.
Yes, that's exactly what I'm concerned about, and I've seen your
comments on patch 1. Thanks.
>
>
> Thanks,
> Ming


