From: Changheun Lee <nanich.lee@samsung.com>
To: hch@infradead.org
Cc: Johannes.Thumshirn@wdc.com, alex_y_xu@yahoo.ca,
asml.silence@gmail.com, axboe@kernel.dk, bgoncalv@redhat.com,
bvanassche@acm.org, damien.lemoal@wdc.com,
gregkh@linuxfoundation.org, jaegeuk@kernel.org,
jisoo2146.oh@samsung.com, junho89.kim@samsung.com,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
ming.lei@redhat.com, mj0123.lee@samsung.com,
nanich.lee@samsung.com, osandov@fb.com, patchwork-bot@kernel.org,
seunghwan.hyun@samsung.com, sookwan7.kim@samsung.com,
tj@kernel.org, tom.leiming@gmail.com, woosung2.lee@samsung.com,
yi.zhang@redhat.com, yt0928.kim@samsung.com
Subject: Re: [PATCH v10 0/1] bio: limit bio max size
Date: Wed, 26 May 2021 12:30:36 +0900 [thread overview]
Message-ID: <20210526033036.30257-1-nanich.lee@samsung.com> (raw)
In-Reply-To: <YKJBWClI7sUeABDs@infradead.org>
> On Fri, May 14, 2021 at 03:32:41PM +0900, Changheun Lee wrote:
> > I tested 512MB file read with direct I/O. and chunk size is 64MB.
> > - on SCSI disk, with no limit of bio max size(4GB) : avg. 630 MB/s
> > - on SCSI disk, with limit bio max size to 1MB : avg. 645 MB/s
> > - on ramdisk, with no limit of bio max size(4GB) : avg. 2749 MB/s
> > - on ramdisk, with limit bio max size to 1MB : avg. 3068 MB/s
> >
> > I set ramdisk environment as below.
> > - dd if=/dev/zero of=/mnt/ramdisk.img bs=$((1024*1024)) count=1024
> > - mkfs.ext4 /mnt/ramdisk.img
> > - mkdir /mnt/ext4ramdisk
> > - mount -o loop /mnt/ramdisk.img /mnt/ext4ramdisk
> >
> > With a low-performance disk, the bio submit delay caused by a large bio
> > size is only a small portion of the total I/O time, so it is hard to
> > notice. But it becomes visible on a high-performance disk.
>
> So let's attack the problem properly:
>
> 1) switch f2fs to a direct I/O implementation that does not suck
> 2) look into optimizing the iomap code to e.g. submit the bio once
> it is larger than queue_io_opt() without failing to add to a bio
> which would be annoying for things like huge pages.
There is a bio submit delay in iomap_dio_bio_actor() too.
As the bio size increases, bio_iov_iter_get_pages() in iomap_dio_bio_actor()
takes more time. I measured how much time is spent in bio_iov_iter_get_pages()
for each bio size with ftrace.
I added the trace points at the positions below.
--------------
static loff_t
iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
		struct iomap_dio *dio, struct iomap *iomap)
{
	... snip ...
	nr_pages = bio_iov_vecs_to_alloc(dio->submit.iter, BIO_MAX_VECS);
	do {
		... snip ...
		trace_mark_begin_end('B', "iomap_dio_bio_actor",
			"bio_iov_iter_get_pages", "bi_size", bio->bi_iter.bi_size, 0);
		ret = bio_iov_iter_get_pages(bio, dio->submit.iter);
		trace_mark_begin_end('E', "iomap_dio_bio_actor",
			"bio_iov_iter_get_pages", "bi_size", bio->bi_iter.bi_size, 0);
		... snip ...
	} while (nr_pages);
	... snip ...
}
The bio submit delay was 0.834 ms for a 32MB bio.
----------
4154.574861: mark_begin_end: B|11511|iomap_dio_bio_actor:bio_iov_iter_get_pages|bi_size=0;
4154.575317: mark_begin_end: E|11511|iomap_dio_bio_actor:bio_iov_iter_get_pages|bi_size=34181120;
4154.575695: block_bio_queue: 7,5 R 719672 + 66760 [tiotest]
The bio submit delay was 0.027 ms for a 1MB bio.
----------
4868.617791: mark_begin_end: B|19510|iomap_dio_bio_actor:bio_iov_iter_get_pages|bi_size=0;
4868.617807: mark_begin_end: E|19510|iomap_dio_bio_actor:bio_iov_iter_get_pages|bi_size=1048576;
4868.617818: block_bio_queue: 7,5 R 1118208 + 2048 [tiotest]
To optimize this, the current patch, or a similar approach, is needed in
bio_iov_iter_get_pages(). Is it OK to add a new function to set the bio max
size, as below, and call it wherever a bio max size limit is needed?
void blk_queue_bio_max_size(struct request_queue *q, unsigned int bytes)
{
	struct queue_limits *limits = &q->limits;
	unsigned int bio_max_size = round_up(bytes, PAGE_SIZE);

	limits->bio_max_bytes = max_t(unsigned int, bio_max_size,
				      BIO_MAX_VECS * PAGE_SIZE);
}
EXPORT_SYMBOL(blk_queue_bio_max_size);