From: yangerkun <yangerkun@huawei.com>
To: <hch@infradead.org>, <linux-xfs@vger.kernel.org>,
	<linux-fsdevel@vger.kernel.org>
Cc: <yangerkun@huawei.com>, <yi.zhang@huawei.com>, <houtao1@huawei.com>
Subject: [PATCH] iomap: fix the polled IO logic in iomap_dio_bio_actor
Date: Mon, 14 Oct 2019 22:43:13 +0800
Message-ID: <20191014144313.26313-1-yangerkun@huawei.com>

Only set REQ_HIPRI for the last bio submitted from
iomap_dio_bio_actor(). This function can split one request into
multiple bios, and those bios can be dispatched on different CPUs (and
hence different hardware queues) when the submitting process is
preempted between submissions. Since iomap_dio_rw() only polls for the
last submitted bio, mark only that bio as polled.
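
For reference, the waiter side in iomap_dio_rw() records only the
request queue and cookie of the last submitted bio, so only that bio
is ever polled for. A lightly simplified sketch of the v5.3-era wait
loop (an illustration, not an exact quote of the source):

	if (!atomic_dec_and_test(&dio->ref)) {
		if (!dio->wait_for_completion)
			return -EIOCBQUEUED;

		for (;;) {
			set_current_state(TASK_UNINTERRUPTIBLE);
			if (!READ_ONCE(dio->submit.waiter))
				break;

			/*
			 * dio->submit.{last_queue,cookie} are overwritten
			 * on every submission, so this only ever polls the
			 * queue that the final bio was submitted to.
			 */
			if (!(iocb->ki_flags & IOCB_HIPRI) ||
			    !dio->submit.last_queue ||
			    !blk_poll(dio->submit.last_queue,
				      dio->submit.cookie, true))
				io_schedule();
		}
		__set_current_state(TASK_RUNNING);
	}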

Signed-off-by: yangerkun <yangerkun@huawei.com>
---
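A note for testers (below the --- so git am drops it): one way to
drive this path from userspace is an RWF_HIPRI direct read. The
snippet below is a hypothetical test program, not part of the patch;
RWF_HIPRI only results in actual polling when the device exposes poll
queues (e.g. booting with nvme.poll_queues=N).

/*
 * Hypothetical test: issue one polled direct read via
 * preadv2(..., RWF_HIPRI) against a file on the filesystem under test.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct iovec iov;
	void *buf;
	ssize_t ret;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	/* O_DIRECT requires an aligned buffer. */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;

	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	iov.iov_base = buf;
	iov.iov_len = 4096;

	/* RWF_HIPRI asks the kernel to poll for completion. */
	ret = preadv2(fd, &iov, 1, 0, RWF_HIPRI);
	if (ret < 0)
		perror("preadv2");
	else
		printf("read %zd bytes\n", ret);

	close(fd);
	free(buf);
	return ret < 0;
}
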
 fs/iomap/direct-io.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 1fc28c2da279..05dee6e7ca64 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -59,15 +59,17 @@ int iomap_dio_iopoll(struct kiocb *kiocb, bool spin)
 EXPORT_SYMBOL_GPL(iomap_dio_iopoll);
 
 static void iomap_dio_submit_bio(struct iomap_dio *dio, struct iomap *iomap,
-		struct bio *bio)
+		struct bio *bio, bool is_poll)
 {
 	atomic_inc(&dio->ref);
 
-	if (dio->iocb->ki_flags & IOCB_HIPRI)
+	if (is_poll) {
 		bio_set_polled(bio, dio->iocb);
-
-	dio->submit.last_queue = bdev_get_queue(iomap->bdev);
-	dio->submit.cookie = submit_bio(bio);
+		dio->submit.last_queue = bdev_get_queue(iomap->bdev);
+		dio->submit.cookie = submit_bio(bio);
+	} else {
+		submit_bio(bio);
+	}
 }
 
 static ssize_t iomap_dio_complete(struct iomap_dio *dio)
@@ -191,7 +193,7 @@ iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos,
 	get_page(page);
 	__bio_add_page(bio, page, len, 0);
 	bio_set_op_attrs(bio, REQ_OP_WRITE, flags);
-	iomap_dio_submit_bio(dio, iomap, bio);
+	iomap_dio_submit_bio(dio, iomap, bio, false);
 }
 
 static loff_t
@@ -255,6 +257,8 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
 
 	do {
 		size_t n;
+		bool is_poll = false;
+
 		if (dio->error) {
 			iov_iter_revert(dio->submit.iter, copied);
 			return 0;
@@ -301,7 +305,12 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
 		copied += n;
 
 		nr_pages = iov_iter_npages(&iter, BIO_MAX_PAGES);
-		iomap_dio_submit_bio(dio, iomap, bio);
+
+		/* Only the last bio is ever polled for. */
+		if (!nr_pages && dio->iocb->ki_flags & IOCB_HIPRI)
+			is_poll = true;
+
+		iomap_dio_submit_bio(dio, iomap, bio, is_poll);
 	} while (nr_pages);
 
 	/*
-- 
2.17.2

