From: Jeffle Xu <jefflexu@linux.alibaba.com>
To: axboe@kernel.dk, hch@infradead.org, ming.lei@redhat.com
Cc: linux-block@vger.kernel.org, io-uring@vger.kernel.org,
	joseph.qi@linux.alibaba.com
Subject: [PATCH v4 2/2] block,iomap: disable iopoll when split needed
Date: Tue, 17 Nov 2020 15:56:25 +0800
Message-ID: <20201117075625.46118-3-jefflexu@linux.alibaba.com>
In-Reply-To: <20201117075625.46118-1-jefflexu@linux.alibaba.com>

Both blkdev fs and iomap-based fs (ext4, xfs, etc.) currently support
sync iopoll. A single bio can contain at most BIO_MAX_PAGES, i.e. 256,
bio_vecs. If the input iov_iter contains more than 256 segments, the
dio will be split into multiple bios, which may cause a potential
deadlock for sync iopoll.
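
To see where the split happens: the submission loop allocates a new
bio whenever segments remain after filling the current one. A
simplified sketch of the loop in __blkdev_direct_IO() (abbreviated,
error handling and accounting dropped):

	for (;;) {
		ret = bio_iov_iter_get_pages(bio, iter);
		...
		nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
		if (!nr_pages) {
			/* final bio: submit it and leave the loop */
			qc = submit_bio(bio);
			break;
		}
		/* segments remain: submit this bio, allocate another */
		submit_bio(bio);
		bio = bio_alloc(GFP_KERNEL, nr_pages);
	}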

For sync iopoll, bios are submitted without the REQ_NOWAIT flag set,
so if the dio is split into multiple bios it can rapidly exhaust the
queue depth and the process may hang in blk_mq_get_tag(). The process
then has to wait for the completion of the previously allocated
requests, but those completions are only reaped by the subsequent sync
polling, which the blocked process never reaches, thus causing a
potential deadlock.
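
Spelled out as a hypothetical sequence (assuming a queue depth of 4
and a sync polled dio split into 5 bios, none carrying REQ_NOWAIT):

	/*
	 * submit bio 0..3  -> all 4 tags consumed
	 * submit bio 4     -> blk_mq_get_tag() sleeps for a free tag
	 *
	 * A tag is freed only when an earlier bio completes, but on a
	 * polled queue completions are reaped only by blk_poll(),
	 * which this task would call only after submitting all bios.
	 * Neither side can make progress: deadlock.
	 */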

In fact, there is a subtle difference in how blkdev fs and iomap-based
fs handle HIPRI IO when a dio needs to be split into multiple bios.
blkdev fs sets REQ_HIPRI only on the last split bio, leaving the
earlier bios queued to normal hardware queues, so it does not run into
the trouble described above. iomap-based fs sets REQ_HIPRI on all
split bios, and thus may cause the potential deadlock described above.
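
The difference is visible in the two submission paths (abbreviated;
only the polling-related lines are kept):

	/* fs/block_dev.c, __blkdev_direct_IO(): only the last bio */
	if (!nr_pages) {
		if (iocb->ki_flags & IOCB_HIPRI)
			bio_set_polled(bio, iocb);
		qc = submit_bio(bio);
		break;
	}
	submit_bio(bio);	/* earlier split bios: never polled */

	/* fs/iomap/direct-io.c, iomap_dio_submit_bio(): every bio */
	if (dio->iocb->ki_flags & IOCB_HIPRI)
		bio_set_polled(bio, dio->iocb);
	dio->submit.cookie = submit_bio(bio);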

Note that despite the analysis above, blkdev fs and iomap-based fs
currently won't trigger this potential deadlock, because only
preadv2(2)/pwritev2(2) are capable of *sync* polling, as only these
two can set RWF_HIPRI. The maximum number of iovecs in a single
preadv2(2)/pwritev2(2) call is UIO_MAXIOV, i.e. 1024, while the
minimum queue depth is BLKDEV_MIN_RQ, i.e. 4. That means one
preadv2(2)/pwritev2(2) call can submit at most 4 bios, which fills the
minimum queue depth *exactly*, and thus there's no deadlock in this
case.
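
The arithmetic behind the "at most 4 bios" claim, for reference:

	UIO_MAXIOV / BIO_MAX_PAGES = 1024 / 256 = 4 = BLKDEV_MIN_RQ

so even the largest preadv2(2)/pwritev2(2) call consumes no more tags
than the smallest queue provides.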

However, this constraint is fragile. Disable iopoll when a dio needs
to be split into multiple bios. Though blkdev fs may not suffer from
this issue, it makes little sense to iopoll big IO anyway, since
iopoll is meant for small, latency-sensitive IO.
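
For illustration, an IO that gets split can be constructed from
userspace roughly as follows (hypothetical test snippet; the device
path is a placeholder, error handling is omitted, and the file must
sit on a queue that supports polling):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdlib.h>
	#include <sys/uio.h>

	int main(void)
	{
		struct iovec iov[1024];
		char *buf = aligned_alloc(4096, 1024UL * 4096);
		int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);

		for (int i = 0; i < 1024; i++) {
			iov[i].iov_base = buf + i * 4096;
			iov[i].iov_len = 4096;
		}
		/* 1024 segments > BIO_MAX_PAGES (256): the dio splits */
		return preadv2(fd, iov, 1024, 0, RWF_HIPRI) < 0;
	}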

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
---
 fs/block_dev.c       |  9 +++++++++
 fs/iomap/direct-io.c | 10 ++++++++++
 2 files changed, 19 insertions(+)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9e84b1928b94..ed3f46e8fa91 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -436,6 +436,15 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 			break;
 		}
 
+		/*
+		 * The current dio needs to be split into multiple bios here.
+		 * Polling for a split bio can cause subtle trouble such as a
+		 * hang when doing sync polling, while iopoll is meant for
+		 * small, latency-sensitive IO. Thus disable iopoll if a
+		 * split is needed.
+		 */
+		iocb->ki_flags &= ~IOCB_HIPRI;
+
 		if (!dio->multi_bio) {
 			/*
 			 * AIO needs an extra reference to ensure the dio
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 933f234d5bec..396ac0f91a43 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -309,6 +309,16 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
 		copied += n;
 
 		nr_pages = iov_iter_npages(dio->submit.iter, BIO_MAX_PAGES);
+		/*
+		 * The current dio needs to be split into multiple bios here.
+		 * Polling for a split bio can cause subtle trouble such as a
+		 * hang when doing sync polling, while iopoll is meant for
+		 * small, latency-sensitive IO. Thus disable iopoll if a
+		 * split is needed.
+		 */
+		if (nr_pages)
+			dio->iocb->ki_flags &= ~IOCB_HIPRI;
+
 		iomap_dio_submit_bio(dio, iomap, bio, pos);
 		pos += n;
 	} while (nr_pages);
-- 
2.27.0

