From: "Darrick J. Wong" <djwong@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Jeffle Xu <jefflexu@linux.alibaba.com>,
	Ming Lei <ming.lei@redhat.com>, Damien Le Moal <Damien.LeMoal@wdc.com>,
	Keith Busch <kbusch@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
	"Wunderlich, Mark" <mark.wunderlich@intel.com>,
	"Vasudevan, Anil" <anil.vasudevan@intel.com>,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-nvme@lists.infradead.org
Subject: Re: [PATCH 03/15] iomap: don't try to poll multi-bio I/Os in __iomap_dio_rw
Date: Wed, 12 May 2021 14:32:30 -0700
Message-ID: <20210512213230.GB8543@magnolia>
In-Reply-To: <20210512131545.495160-4-hch@lst.de>

On Wed, May 12, 2021 at 03:15:33PM +0200, Christoph Hellwig wrote:
> If an iocb is split into multiple bios we can't poll for both.  So don't
> bother to even try to poll in that case.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Ahh, the fun of reviewing things like iopoll that I'm not all /that/
familiar with...

> ---
>  fs/iomap/direct-io.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
>
> diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> index 9398b8c31323..d5637f467109 100644
> --- a/fs/iomap/direct-io.c
> +++ b/fs/iomap/direct-io.c
> @@ -282,6 +282,13 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
>  	if (!iov_iter_count(dio->submit.iter))
>  		goto out;
>
> +	/*
> +	 * We can only poll for single bio I/Os.
> +	 */
> +	if (need_zeroout ||
> +	    ((dio->flags & IOMAP_DIO_WRITE) && pos >= i_size_read(inode)))

Hm, is this logic here to catch the second iomap_dio_zero below the
zero_tail: label?  What happens if we have an unaligned direct write
that starts below EOF but (for whatever reason) ends the loop with pos
now above EOF but not on a block boundary?
> +		dio->iocb->ki_flags &= ~IOCB_HIPRI;
> +
>  	if (need_zeroout) {
>  		/* zero out from the start of the block to the write offset */
>  		pad = pos & (fs_block_size - 1);
> @@ -339,6 +346,11 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
>
>  		nr_pages = bio_iov_vecs_to_alloc(dio->submit.iter,
>  						 BIO_MAX_VECS);
> +		/*
> +		 * We can only poll for single bio I/Os.
> +		 */
> +		if (nr_pages)
> +			dio->iocb->ki_flags &= ~IOCB_HIPRI;
>  		iomap_dio_submit_bio(dio, iomap, bio, pos);
>  		pos += n;
>  	} while (nr_pages);
> @@ -579,6 +591,11 @@ __iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
>  			iov_iter_revert(iter, pos - dio->i_size);
>  			break;
>  		}
> +
> +		/*
> +		 * We can only poll for single bio I/Os.
> +		 */
> +		iocb->ki_flags &= ~IOCB_HIPRI;

Hmm, why is this here?  Won't this clear IOCB_HIPRI even if the first
iomap_apply call successfully creates a polled bio for the entire iovec,
such that we exit the loop one line later because count becomes zero?
How does the upper layer (io_uring, I surmise?) know that there's a bio
that it should poll for?

<shrug> Unless the only effect that this has is making it so that the
subsequent calls to iomap_apply don't have the polling mode set?  I see
enough places in io_uring.c that check (iocb->ki_flags & IOCB_HIPRI) to
make me wonder if the lifetime of that flag ends as soon as we get to
->{read,write}_iter?

--D

> 	} while ((count = iov_iter_count(iter)) > 0);
> 	blk_finish_plug(&plug);
>
> --
> 2.30.2
>