linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 1/2] bio: limit bio max size
       [not found] <CGME20210129040447epcas1p4531f0bf1ddebf0b469af87e85199cc43@epcas1p4.samsung.com>
@ 2021-01-29  3:49 ` Changheun Lee
       [not found]   ` <CGME20210129040448epcas1p44da94c82ac4fb82d77c452bda56d6b09@epcas1p4.samsung.com>
                     ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Changheun Lee @ 2021-01-29  3:49 UTC (permalink / raw)
  To: hch, Johannes.Thumshirn, asml.silence, axboe, damien.lemoal,
	linux-block, linux-kernel, ming.lei, osandov, patchwork-bot, tj,
	tom.leiming
  Cc: jisoo2146.oh, junho89.kim, mj0123.lee, seunghwan.hyun,
	sookwan7.kim, woosung2.lee, yt0928.kim, Changheun Lee

bio size can grow up to 4GB when multi-page bvec is enabled,
but sometimes this leads to inefficient behavior.
In the case of large-chunk direct I/O - a 32MB chunk read in user space -
all pages for the 32MB are merged into one bio structure if the pages'
physical addresses are contiguous. This delays submission until the
merge completes. bio max size should be limited to a proper size.

When a 32MB chunk read with the direct I/O option comes from userspace,
current kernel behavior is as below (shown as a timeline).

 | bio merge for 32MB. total 8,192 pages are merged.
 | total elapsed time is over 2ms.
 |------------------ ... ----------------------->|
                                                 | 8,192 pages merged into a bio.
                                                 | at this point, the first bio submit is done.
                                                 | 1 bio is split into 32 read requests and issued.
                                                 |--------------->
                                                  |--------------->
                                                   |--------------->
                                                              ......
                                                                   |--------------->
                                                                    |--------------->|
                           total 19ms elapsed to complete the 32MB read from device. |

If bio max size is limited to 1MB, the behavior changes as below.

 | bio merge for 1MB. 256 pages are merged for each bio.
 | total 32 bios will be made.
 | total elapsed time is over 2ms. it's the same.
 | but the first bio submit happens much earlier, after about 100us.
 |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
      | 256 pages merged into a bio.
      | at this point, the first bio submit is done.
      | and 1 read request is issued per bio.
      |--------------->
           |--------------->
                |--------------->
                                      ......
                                                 |--------------->
                                                  |--------------->|
         total 17ms elapsed to complete the 32MB read from device. |

As a result, read requests are issued sooner when bio max size is limited.
With current kernel behavior and multipage bvec, a super large bio can be
created, and it delays issuing the first I/O request.
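
A queue opts into the cap via the new flag. As an illustration only (the
call site below is hypothetical; blk_queue_flag_set() is the existing
helper for setting queue flags):

	/* hypothetical driver-side opt-in to the bio size cap */
	blk_queue_flag_set(QUEUE_FLAG_LIMIT_BIO_SIZE, q);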

Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
---
 block/bio.c            | 13 ++++++++++++-
 include/linux/bio.h    |  2 +-
 include/linux/blkdev.h |  3 +++
 3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 1f2cc1fbe283..c528e1f944c7 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -287,6 +287,17 @@ void bio_init(struct bio *bio, struct bio_vec *table,
 }
 EXPORT_SYMBOL(bio_init);
 
+unsigned int bio_max_size(struct bio *bio)
+{
+	struct request_queue *q = bio->bi_disk->queue;
+
+	if (blk_queue_limit_bio_size(q))
+		return blk_queue_get_max_sectors(q, bio_op(bio))
+			<< SECTOR_SHIFT;
+
+	return UINT_MAX;
+}
+
 /**
  * bio_reset - reinitialize a bio
  * @bio:	bio to reset
@@ -877,7 +888,7 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
 		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
 
 		if (page_is_mergeable(bv, page, len, off, same_page)) {
-			if (bio->bi_iter.bi_size > UINT_MAX - len) {
+			if (bio->bi_iter.bi_size > bio_max_size(bio) - len) {
 				*same_page = false;
 				return false;
 			}
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 1edda614f7ce..13b6f6562a5b 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -113,7 +113,7 @@ static inline bool bio_full(struct bio *bio, unsigned len)
 	if (bio->bi_vcnt >= bio->bi_max_vecs)
 		return true;
 
-	if (bio->bi_iter.bi_size > UINT_MAX - len)
+	if (bio->bi_iter.bi_size > bio_max_size(bio) - len)
 		return true;
 
 	return false;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f94ee3089e01..3aeab9e7e97b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -621,6 +621,7 @@ struct request_queue {
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_NOWAIT       29	/* device supports NOWAIT */
+#define QUEUE_FLAG_LIMIT_BIO_SIZE 30	/* limit bio size */
 
 #define QUEUE_FLAG_MQ_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP) |		\
@@ -667,6 +668,8 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_fua(q)	test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
 #define blk_queue_registered(q)	test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
 #define blk_queue_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
+#define blk_queue_limit_bio_size(q)	\
+	test_bit(QUEUE_FLAG_LIMIT_BIO_SIZE, &(q)->queue_flags)
 
 extern void blk_set_pm_only(struct request_queue *q);
 extern void blk_clear_pm_only(struct request_queue *q);
-- 
2.28.0



* [PATCH v4 2/2] bio: add limit_bio_size sysfs
       [not found]   ` <CGME20210129040448epcas1p44da94c82ac4fb82d77c452bda56d6b09@epcas1p4.samsung.com>
@ 2021-01-29  3:49     ` Changheun Lee
  2021-02-03  9:00       ` Greg KH
  0 siblings, 1 reply; 13+ messages in thread
From: Changheun Lee @ 2021-01-29  3:49 UTC (permalink / raw)
  To: hch, Johannes.Thumshirn, asml.silence, axboe, damien.lemoal,
	linux-block, linux-kernel, ming.lei, osandov, patchwork-bot, tj,
	tom.leiming
  Cc: jisoo2146.oh, junho89.kim, mj0123.lee, seunghwan.hyun,
	sookwan7.kim, woosung2.lee, yt0928.kim, Changheun Lee

Add new sysfs node to limit bio size.

Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
---
 block/blk-sysfs.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index b513f1683af0..840d97f427e6 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -288,6 +288,7 @@ QUEUE_SYSFS_BIT_FNS(nonrot, NONROT, 1);
 QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
 QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
 QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
+QUEUE_SYSFS_BIT_FNS(limit_bio_size, LIMIT_BIO_SIZE, 0);
 #undef QUEUE_SYSFS_BIT_FNS
 
 static ssize_t queue_zoned_show(struct request_queue *q, char *page)
@@ -615,6 +616,7 @@ QUEUE_RW_ENTRY(queue_nonrot, "rotational");
 QUEUE_RW_ENTRY(queue_iostats, "iostats");
 QUEUE_RW_ENTRY(queue_random, "add_random");
 QUEUE_RW_ENTRY(queue_stable_writes, "stable_writes");
+QUEUE_RW_ENTRY(queue_limit_bio_size, "limit_bio_size");
 
 static struct attribute *queue_attrs[] = {
 	&queue_requests_entry.attr,
@@ -648,6 +650,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_rq_affinity_entry.attr,
 	&queue_iostats_entry.attr,
 	&queue_stable_writes_entry.attr,
+	&queue_limit_bio_size_entry.attr,
 	&queue_random_entry.attr,
 	&queue_poll_entry.attr,
 	&queue_wc_entry.attr,
-- 
2.28.0



* Re: [PATCH v4 1/2] bio: limit bio max size
  2021-01-29  3:49 ` [PATCH v4 1/2] bio: limit bio max size Changheun Lee
       [not found]   ` <CGME20210129040448epcas1p44da94c82ac4fb82d77c452bda56d6b09@epcas1p4.samsung.com>
@ 2021-01-29  7:23   ` Ming Lei
       [not found]   ` <CGME20210201030830epcas1p402e8a088fb16af9fbbb130b152e097f1@epcas1p4.samsung.com>
  2 siblings, 0 replies; 13+ messages in thread
From: Ming Lei @ 2021-01-29  7:23 UTC (permalink / raw)
  To: Changheun Lee
  Cc: hch, Johannes.Thumshirn, asml.silence, axboe, damien.lemoal,
	linux-block, linux-kernel, osandov, patchwork-bot, tj,
	tom.leiming, jisoo2146.oh, junho89.kim, mj0123.lee,
	seunghwan.hyun, sookwan7.kim, woosung2.lee, yt0928.kim

On Fri, Jan 29, 2021 at 12:49:08PM +0900, Changheun Lee wrote:
> bio size can grow up to 4GB when muli-page bvec is enabled.
> but sometimes it would lead to inefficient behaviors.
> in case of large chunk direct I/O, - 32MB chunk read in user space -
> all pages for 32MB would be merged to a bio structure if the pages
> physical addresses are contiguous. it makes some delay to submit
> until merge complete. bio max size should be limited to a proper size.
> 
> When 32MB chunk read with direct I/O option is coming from userspace,
> kernel behavior is below now. it's timeline.
> 
>  | bio merge for 32MB. total 8,192 pages are merged.
>  | total elapsed time is over 2ms.
>  |------------------ ... ----------------------->|
>                                                  | 8,192 pages merged a bio.
>                                                  | at this time, first bio submit is done.
>                                                  | 1 bio is split to 32 read request and issue.
>                                                  |--------------->
>                                                   |--------------->
>                                                    |--------------->
>                                                               ......
>                                                                    |--------------->
>                                                                     |--------------->|
>                           total 19ms elapsed to complete 32MB read done from device. |
> 
> If bio max size is limited with 1MB, behavior is changed below.
> 
>  | bio merge for 1MB. 256 pages are merged for each bio.
>  | total 32 bio will be made.
>  | total elapsed time is over 2ms. it's same.
>  | but, first bio submit timing is fast. about 100us.
>  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
>       | 256 pages merged a bio.
>       | at this time, first bio submit is done.
>       | and 1 read request is issued for 1 bio.
>       |--------------->
>            |--------------->
>                 |--------------->
>                                       ......
>                                                  |--------------->
>                                                   |--------------->|
>         total 17ms elapsed to complete 32MB read done from device. |

Can you share with us whether enabling THP in your application avoids this
issue? BTW, you need to make the 32MB buffer aligned to the huge page size.
IMO, THP fits your case perfectly.
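
For reference, a minimal user-space sketch of such an aligned allocation
(the helper name, sizes, and error handling are illustrative):

	#include <stdlib.h>
	#include <sys/mman.h>

	/* Allocate a buffer aligned to the 2MB huge page size so THP can
	 * back it with contiguous huge pages. */
	static void *alloc_thp_buf(size_t len)
	{
		void *buf;

		if (posix_memalign(&buf, 2UL << 20, len))
			return NULL;
		madvise(buf, len, MADV_HUGEPAGE);	/* explicit THP hint */
		return buf;
	}

	/* e.g. alloc_thp_buf(32UL << 20) for the 32MB direct I/O read */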


Thanks,
Ming



* Re: [PATCH v4 1/2] bio: limit bio max size
       [not found]   ` <CGME20210201030830epcas1p402e8a088fb16af9fbbb130b152e097f1@epcas1p4.samsung.com>
@ 2021-02-01  2:52     ` Changheun Lee
  2021-02-01  7:14       ` Ming Lei
  0 siblings, 1 reply; 13+ messages in thread
From: Changheun Lee @ 2021-02-01  2:52 UTC (permalink / raw)
  To: nanich.lee
  Cc: Johannes.Thumshirn, asml.silence, axboe, damien.lemoal, hch,
	jisoo2146.oh, junho89.kim, linux-block, linux-kernel, ming.lei,
	mj0123.lee, osandov, patchwork-bot, seunghwan.hyun, sookwan7.kim,
	tj, tom.leiming, woosung2.lee, yt0928.kim

> On Fri, Jan 29, 2021 at 12:49:08PM +0900, Changheun Lee wrote:
> > bio size can grow up to 4GB when muli-page bvec is enabled.
> > but sometimes it would lead to inefficient behaviors.
> > in case of large chunk direct I/O, - 32MB chunk read in user space -
> > all pages for 32MB would be merged to a bio structure if the pages
> > physical addresses are contiguous. it makes some delay to submit
> > until merge complete. bio max size should be limited to a proper size.
> > 
> > When 32MB chunk read with direct I/O option is coming from userspace,
> > kernel behavior is below now. it's timeline.
> > 
> >  | bio merge for 32MB. total 8,192 pages are merged.
> >  | total elapsed time is over 2ms.
> >  |------------------ ... ----------------------->|
> >                                                  | 8,192 pages merged a bio.
> >                                                  | at this time, first bio submit is done.
> >                                                  | 1 bio is split to 32 read request and issue.
> >                                                  |--------------->
> >                                                   |--------------->
> >                                                    |--------------->
> >                                                               ......
> >                                                                    |--------------->
> >                                                                     |--------------->|
> >                           total 19ms elapsed to complete 32MB read done from device. |
> > 
> > If bio max size is limited with 1MB, behavior is changed below.
> > 
> >  | bio merge for 1MB. 256 pages are merged for each bio.
> >  | total 32 bio will be made.
> >  | total elapsed time is over 2ms. it's same.
> >  | but, first bio submit timing is fast. about 100us.
> >  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
> >       | 256 pages merged a bio.
> >       | at this time, first bio submit is done.
> >       | and 1 read request is issued for 1 bio.
> >       |--------------->
> >            |--------------->
> >                 |--------------->
> >                                       ......
> >                                                  |--------------->
> >                                                   |--------------->|
> >         total 17ms elapsed to complete 32MB read done from device. |
> 
> Can you share us if enabling THP in your application can avoid this issue? BTW, you
> need to make the 32MB buffer aligned with huge page size. IMO, THP perfectly fits
> your case.
> 

THP is already enabled in my environment, as shown below. It has no effect.

cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

This issue was reported by a performance benchmark application in the open
market. I can't control how applications in the open market behave.
It's not only my own case; this issue may occur in many mobile environments.
At least, I have confirmed this problem on Exynos and Qualcomm chipsets.

> 
> Thanks,
> Ming
> 
> 

---
Changheun Lee
Samsung Electronics.


* Re: [PATCH v4 1/2] bio: limit bio max size
  2021-02-01  2:52     ` Changheun Lee
@ 2021-02-01  7:14       ` Ming Lei
       [not found]         ` <CGME20210202042747epcas1p2054df120098b3130cb104cf8e4731797@epcas1p2.samsung.com>
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2021-02-01  7:14 UTC (permalink / raw)
  To: Changheun Lee
  Cc: Johannes.Thumshirn, asml.silence, axboe, damien.lemoal, hch,
	jisoo2146.oh, junho89.kim, linux-block, linux-kernel, mj0123.lee,
	osandov, patchwork-bot, seunghwan.hyun, sookwan7.kim, tj,
	tom.leiming, woosung2.lee, yt0928.kim

On Mon, Feb 01, 2021 at 11:52:48AM +0900, Changheun Lee wrote:
> > On Fri, Jan 29, 2021 at 12:49:08PM +0900, Changheun Lee wrote:
> > > bio size can grow up to 4GB when muli-page bvec is enabled.
> > > but sometimes it would lead to inefficient behaviors.
> > > in case of large chunk direct I/O, - 32MB chunk read in user space -
> > > all pages for 32MB would be merged to a bio structure if the pages
> > > physical addresses are contiguous. it makes some delay to submit
> > > until merge complete. bio max size should be limited to a proper size.
> > > 
> > > When 32MB chunk read with direct I/O option is coming from userspace,
> > > kernel behavior is below now. it's timeline.
> > > 
> > >  | bio merge for 32MB. total 8,192 pages are merged.
> > >  | total elapsed time is over 2ms.
> > >  |------------------ ... ----------------------->|
> > >                                                  | 8,192 pages merged a bio.
> > >                                                  | at this time, first bio submit is done.
> > >                                                  | 1 bio is split to 32 read request and issue.
> > >                                                  |--------------->
> > >                                                   |--------------->
> > >                                                    |--------------->
> > >                                                               ......
> > >                                                                    |--------------->
> > >                                                                     |--------------->|
> > >                           total 19ms elapsed to complete 32MB read done from device. |
> > > 
> > > If bio max size is limited with 1MB, behavior is changed below.
> > > 
> > >  | bio merge for 1MB. 256 pages are merged for each bio.
> > >  | total 32 bio will be made.
> > >  | total elapsed time is over 2ms. it's same.
> > >  | but, first bio submit timing is fast. about 100us.
> > >  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
> > >       | 256 pages merged a bio.
> > >       | at this time, first bio submit is done.
> > >       | and 1 read request is issued for 1 bio.
> > >       |--------------->
> > >            |--------------->
> > >                 |--------------->
> > >                                       ......
> > >                                                  |--------------->
> > >                                                   |--------------->|
> > >         total 17ms elapsed to complete 32MB read done from device. |
> > 
> > Can you share us if enabling THP in your application can avoid this issue? BTW, you
> > need to make the 32MB buffer aligned with huge page size. IMO, THP perfectly fits
> > your case.
> > 
> 
> THP is enabled already like as below in my environment. It has no effect.
> 
> cat /sys/kernel/mm/transparent_hugepage/enabled
> [always] madvise never

The 32MB user buffer needs to be aligned to the huge page size. If your
system supports bcc/bpftrace, it is quite easy to check whether the buffer
is aligned.
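
For example, something like this could count read() buffers by alignment
(untested sketch; it assumes the benchmark runs as "fio"):

	bpftrace -e 'tracepoint:syscalls:sys_enter_read /comm == "fio"/ {
		@[(uint64)args->buf & 0x1fffff ? "unaligned" : "2MB aligned"] = count(); }'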

> 
> This issue was reported from performance benchmark application in open market.
> I can't control application's working in open market.
> It's not only my own case. This issue might be occured in many mobile environment.
> At least, I checked this problem in exynos, and qualcomm chipset.

You just said it takes 2ms to build the 32MB bio, but you never investigated
the reason. I guess it is get_user_pages_fast(), but it may be something else.
Can you dig further into the reason? Maybe it is an arm64-specific issue.

BTW, bio_iov_iter_get_pages() takes just ~200us on one x86_64 VM with THP,
as observed via bcc/funclatency when running the following workload:

[root@ktest-01 test]# cat fio.job
[global]
bs=32768k
rw=randread
iodepth=1
ioengine=psync
direct=1
runtime=20
time_based

group_reporting=0
ramp_time=5

[diotest]
filename=/dev/sde


[root@ktest-01 func]# /usr/share/bcc/tools/funclatency bio_iov_iter_get_pages
Tracing 1 functions for "bio_iov_iter_get_pages"... Hit Ctrl-C to end.
^C
     nsecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 0        |                                        |
       256 -> 511        : 0        |                                        |
       512 -> 1023       : 0        |                                        |
      1024 -> 2047       : 0        |                                        |
      2048 -> 4095       : 0        |                                        |
      4096 -> 8191       : 0        |                                        |
      8192 -> 16383      : 0        |                                        |
     16384 -> 32767      : 0        |                                        |
     32768 -> 65535      : 0        |                                        |
     65536 -> 131071     : 0        |                                        |
    131072 -> 262143     : 1842     |****************************************|
    262144 -> 524287     : 125      |**                                      |
    524288 -> 1048575    : 6        |                                        |
   1048576 -> 2097151    : 0        |                                        |
   2097152 -> 4194303    : 1        |                                        |
   4194304 -> 8388607    : 0        |                                        |
   8388608 -> 16777215   : 1        |                                        |
Detaching...



-- 
Ming



* Re: [PATCH v4 1/2] bio: limit bio max size
       [not found]         ` <CGME20210202042747epcas1p2054df120098b3130cb104cf8e4731797@epcas1p2.samsung.com>
@ 2021-02-02  4:12           ` Changheun Lee
  2021-02-03  3:40             ` Ming Lei
  0 siblings, 1 reply; 13+ messages in thread
From: Changheun Lee @ 2021-02-02  4:12 UTC (permalink / raw)
  To: ming.lei
  Cc: Johannes.Thumshirn, asml.silence, axboe, damien.lemoal, hch,
	jisoo2146.oh, junho89.kim, linux-block, linux-kernel, mj0123.lee,
	nanich.lee, osandov, patchwork-bot, seunghwan.hyun, sookwan7.kim,
	tj, tom.leiming, woosung2.lee, yt0928.kim

> On Mon, Feb 01, 2021 at 11:52:48AM +0900, Changheun Lee wrote:
> > > On Fri, Jan 29, 2021 at 12:49:08PM +0900, Changheun Lee wrote:
> > > > bio size can grow up to 4GB when muli-page bvec is enabled.
> > > > but sometimes it would lead to inefficient behaviors.
> > > > in case of large chunk direct I/O, - 32MB chunk read in user space -
> > > > all pages for 32MB would be merged to a bio structure if the pages
> > > > physical addresses are contiguous. it makes some delay to submit
> > > > until merge complete. bio max size should be limited to a proper size.
> > > > 
> > > > When 32MB chunk read with direct I/O option is coming from userspace,
> > > > kernel behavior is below now. it's timeline.
> > > > 
> > > >  | bio merge for 32MB. total 8,192 pages are merged.
> > > >  | total elapsed time is over 2ms.
> > > >  |------------------ ... ----------------------->|
> > > >                                                  | 8,192 pages merged a bio.
> > > >                                                  | at this time, first bio submit is done.
> > > >                                                  | 1 bio is split to 32 read request and issue.
> > > >                                                  |--------------->
> > > >                                                   |--------------->
> > > >                                                    |--------------->
> > > >                                                               ......
> > > >                                                                    |--------------->
> > > >                                                                     |--------------->|
> > > >                           total 19ms elapsed to complete 32MB read done from device. |
> > > > 
> > > > If bio max size is limited with 1MB, behavior is changed below.
> > > > 
> > > >  | bio merge for 1MB. 256 pages are merged for each bio.
> > > >  | total 32 bio will be made.
> > > >  | total elapsed time is over 2ms. it's same.
> > > >  | but, first bio submit timing is fast. about 100us.
> > > >  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
> > > >       | 256 pages merged a bio.
> > > >       | at this time, first bio submit is done.
> > > >       | and 1 read request is issued for 1 bio.
> > > >       |--------------->
> > > >            |--------------->
> > > >                 |--------------->
> > > >                                       ......
> > > >                                                  |--------------->
> > > >                                                   |--------------->|
> > > >         total 17ms elapsed to complete 32MB read done from device. |
> > > 
> > > Can you share us if enabling THP in your application can avoid this issue? BTW, you
> > > need to make the 32MB buffer aligned with huge page size. IMO, THP perfectly fits
> > > your case.
> > > 
> > 
> > THP is enabled already like as below in my environment. It has no effect.
> > 
> > cat /sys/kernel/mm/transparent_hugepage/enabled
> > [always] madvise never
> 
> The 32MB user buffer needs to be huge page size aligned. If your system
> supports bcc/bpftrace, it is quite easy to check if the buffer is
> aligned.
> 
> > 
> > This issue was reported from performance benchmark application in open market.
> > I can't control application's working in open market.
> > It's not only my own case. This issue might be occured in many mobile environment.
> > At least, I checked this problem in exynos, and qualcomm chipset.
> 
> You just said it takes 2ms for building 32MB bio, but you never investigate the
> reason. I guess it is from get_user_pages_fast(), but maybe others. Can you dig
> further for the reason? Maybe it is one arm64 specific issue.
> 
> BTW, bio_iov_iter_get_pages() just takes ~200us on one x86_64 VM with THP, which is
> observed via bcc/funclatency when running the following workload:
> 

I think you focused on bio_iov_iter_get_pages() because I commented on the
page merge delay only. Sorry about that; I left out details of this issue.
Actually there are many operations inside the while-loop of do_direct_IO(),
and the page merge operation is just one of them. It is called via
dio_send_cur_page() in the while-loop. Below is the call stack.

__bio_try_merge_page+0x4c/0x614
bio_add_page+0x40/0x12c
dio_send_cur_page+0x13c/0x374
submit_page_section+0xb4/0x304
do_direct_IO+0x3d4/0x854
do_blockdev_direct_IO+0x488/0xa18
__blockdev_direct_IO+0x30/0x3c
f2fs_direct_IO+0x6d0/0xb80
generic_file_read_iter+0x284/0x45c
f2fs_file_read_iter+0x3c/0xac
__vfs_read+0x19c/0x204
vfs_read+0xa4/0x144

The 2ms delay is not caused by the page merge operation alone; it includes
many other operations too. But all of those operations, page merge included,
have to run more as the bio size grows.
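
For what it's worth, the whole path can be timed the same way as your
bio_iov_iter_get_pages() measurement, by pointing the same tool at the
direct I/O entry point, e.g.:

	/usr/share/bcc/tools/funclatency __blockdev_direct_IO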

> [root@ktest-01 test]# cat fio.job
> [global]
> bs=32768k
> rw=randread
> iodepth=1
> ioengine=psync
> direct=1
> runtime=20
> time_based
> 
> group_reporting=0
> ramp_time=5
> 
> [diotest]
> filename=/dev/sde
> 
> 
> [root@ktest-01 func]# /usr/share/bcc/tools/funclatency bio_iov_iter_get_pages
> Tracing 1 functions for "bio_iov_iter_get_pages"... Hit Ctrl-C to end.
> ^C
> nsecs               : count     distribution
> 0 -> 1          : 0        |                                        |
> 2 -> 3          : 0        |                                        |
> 4 -> 7          : 0        |                                        |
> 8 -> 15         : 0        |                                        |
> 16 -> 31         : 0        |                                        |
> 32 -> 63         : 0        |                                        |
> 64 -> 127        : 0        |                                        |
> 128 -> 255        : 0        |                                        |
> 256 -> 511        : 0        |                                        |
> 512 -> 1023       : 0        |                                        |
> 1024 -> 2047       : 0        |                                        |
> 2048 -> 4095       : 0        |                                        |
> 4096 -> 8191       : 0        |                                        |
> 8192 -> 16383      : 0        |                                        |
> 16384 -> 32767      : 0        |                                        |
> 32768 -> 65535      : 0        |                                        |
> 65536 -> 131071     : 0        |                                        |
> 131072 -> 262143     : 1842     |****************************************|
> 262144 -> 524287     : 125      |**                                      |
> 524288 -> 1048575    : 6        |                                        |
> 1048576 -> 2097151    : 0        |                                        |
> 2097152 -> 4194303    : 1        |                                        |
> 4194304 -> 8388607    : 0        |                                        |
> 8388608 -> 16777215   : 1        |                                        |
> Detaching...
> 
> 
> 
> -- 
> Ming
> 

---
Changheun Lee
Samsung Electronics


* Re: [PATCH v4 1/2] bio: limit bio max size
  2021-02-02  4:12           ` Changheun Lee
@ 2021-02-03  3:40             ` Ming Lei
       [not found]               ` <CGME20210203054543epcas1p454a7520710a1ae066237375691db90fd@epcas1p4.samsung.com>
  0 siblings, 1 reply; 13+ messages in thread
From: Ming Lei @ 2021-02-03  3:40 UTC (permalink / raw)
  To: Changheun Lee
  Cc: Johannes.Thumshirn, asml.silence, axboe, damien.lemoal, hch,
	jisoo2146.oh, junho89.kim, linux-block, linux-kernel, mj0123.lee,
	osandov, patchwork-bot, seunghwan.hyun, sookwan7.kim, tj,
	tom.leiming, woosung2.lee, yt0928.kim

On Tue, Feb 02, 2021 at 01:12:04PM +0900, Changheun Lee wrote:
> > On Mon, Feb 01, 2021 at 11:52:48AM +0900, Changheun Lee wrote:
> > > > On Fri, Jan 29, 2021 at 12:49:08PM +0900, Changheun Lee wrote:
> > > > > bio size can grow up to 4GB when muli-page bvec is enabled.
> > > > > but sometimes it would lead to inefficient behaviors.
> > > > > in case of large chunk direct I/O, - 32MB chunk read in user space -
> > > > > all pages for 32MB would be merged to a bio structure if the pages
> > > > > physical addresses are contiguous. it makes some delay to submit
> > > > > until merge complete. bio max size should be limited to a proper size.
> > > > > 
> > > > > When 32MB chunk read with direct I/O option is coming from userspace,
> > > > > kernel behavior is below now. it's timeline.
> > > > > 
> > > > >  | bio merge for 32MB. total 8,192 pages are merged.
> > > > >  | total elapsed time is over 2ms.
> > > > >  |------------------ ... ----------------------->|
> > > > >                                                  | 8,192 pages merged a bio.
> > > > >                                                  | at this time, first bio submit is done.
> > > > >                                                  | 1 bio is split to 32 read request and issue.
> > > > >                                                  |--------------->
> > > > >                                                   |--------------->
> > > > >                                                    |--------------->
> > > > >                                                               ......
> > > > >                                                                    |--------------->
> > > > >                                                                     |--------------->|
> > > > >                           total 19ms elapsed to complete 32MB read done from device. |
> > > > > 
> > > > > If bio max size is limited with 1MB, behavior is changed below.
> > > > > 
> > > > >  | bio merge for 1MB. 256 pages are merged for each bio.
> > > > >  | total 32 bio will be made.
> > > > >  | total elapsed time is over 2ms. it's same.
> > > > >  | but, first bio submit timing is fast. about 100us.
> > > > >  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
> > > > >       | 256 pages merged a bio.
> > > > >       | at this time, first bio submit is done.
> > > > >       | and 1 read request is issued for 1 bio.
> > > > >       |--------------->
> > > > >            |--------------->
> > > > >                 |--------------->
> > > > >                                       ......
> > > > >                                                  |--------------->
> > > > >                                                   |--------------->|
> > > > >         total 17ms elapsed to complete 32MB read done from device. |
> > > > 
> > > > Can you share us if enabling THP in your application can avoid this issue? BTW, you
> > > > need to make the 32MB buffer aligned with huge page size. IMO, THP perfectly fits
> > > > your case.
> > > > 
> > > 
> > > THP is enabled already like as below in my environment. It has no effect.
> > > 
> > > cat /sys/kernel/mm/transparent_hugepage/enabled
> > > [always] madvise never
> > 
> > The 32MB user buffer needs to be huge page size aligned. If your system
> > supports bcc/bpftrace, it is quite easy to check if the buffer is
> > aligned.
> > 
> > > 
> > > This issue was reported from performance benchmark application in open market.
> > > I can't control application's working in open market.
> > > It's not only my own case. This issue might be occured in many mobile environment.
> > > At least, I checked this problem in exynos, and qualcomm chipset.
> > 
> > You just said it takes 2ms for building 32MB bio, but you never investigate the
> > reason. I guess it is from get_user_pages_fast(), but maybe others. Can you dig
> > further for the reason? Maybe it is one arm64 specific issue.
> > 
> > BTW, bio_iov_iter_get_pages() just takes ~200us on one x86_64 VM with THP, which is
> > observed via bcc/funclatency when running the following workload:
> > 
> 
> I think you focused on bio_iov_iter_get_pages() because I just commented page
> merge delay only. Sorry about that. I missed details of this issue.
> Actually there are many operations during while-loop in do_direct_IO().
> Page merge operation is just one among them. Page merge operation is called
> by dio_send_cur_page() in while-loop. Below is call stack.
> 
> __bio_try_merge_page+0x4c/0x614
> bio_add_page+0x40/0x12c
> dio_send_cur_page+0x13c/0x374
> submit_page_section+0xb4/0x304
> do_direct_IO+0x3d4/0x854
> do_blockdev_direct_IO+0x488/0xa18
> __blockdev_direct_IO+0x30/0x3c
> f2fs_direct_IO+0x6d0/0xb80
> generic_file_read_iter+0x284/0x45c
> f2fs_file_read_iter+0x3c/0xac
> __vfs_read+0x19c/0x204
> vfs_read+0xa4/0x144
> 
> 2ms delay is not only caused by page merge operation. it inculdes many the
> other operations too. But those many operations included page merge should
> be executed more if bio size is grow up.

OK, got it.

Then I think you can just limit the bio size in dio_bio_add_page() instead
of doing it for everything.
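
Something like the following rough sketch in fs/direct-io.c (the 1MB cap
and the exact placement are illustrative only):

	/* in dio_send_cur_page(): submit early once the current bio
	 * reaches a cap, instead of merging contiguous pages without
	 * bound */
	if (sdio->bio && sdio->bio->bi_iter.bi_size >= SZ_1M)
		dio_bio_submit(dio, sdio);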

-- 
Ming



* Re: [PATCH v4 1/2] bio: limit bio max size
       [not found]               ` <CGME20210203054543epcas1p454a7520710a1ae066237375691db90fd@epcas1p4.samsung.com>
@ 2021-02-03  5:30                 ` Changheun Lee
  0 siblings, 0 replies; 13+ messages in thread
From: Changheun Lee @ 2021-02-03  5:30 UTC (permalink / raw)
  To: ming.lei
  Cc: Johannes.Thumshirn, asml.silence, axboe, damien.lemoal, hch,
	jisoo2146.oh, junho89.kim, linux-block, linux-kernel, mj0123.lee,
	nanich.lee, osandov, patchwork-bot, seunghwan.hyun, sookwan7.kim,
	tj, tom.leiming, woosung2.lee, yt0928.kim

> On Tue, Feb 02, 2021 at 01:12:04PM +0900, Changheun Lee wrote:
> > > On Mon, Feb 01, 2021 at 11:52:48AM +0900, Changheun Lee wrote:
> > > > > On Fri, Jan 29, 2021 at 12:49:08PM +0900, Changheun Lee wrote:
> > > > > > bio size can grow up to 4GB when muli-page bvec is enabled.
> > > > > > but sometimes it would lead to inefficient behaviors.
> > > > > > in case of large chunk direct I/O, - 32MB chunk read in user space -
> > > > > > all pages for 32MB would be merged to a bio structure if the pages
> > > > > > physical addresses are contiguous. it makes some delay to submit
> > > > > > until merge complete. bio max size should be limited to a proper size.
> > > > > > 
> > > > > > When 32MB chunk read with direct I/O option is coming from userspace,
> > > > > > kernel behavior is below now. it's timeline.
> > > > > > 
> > > > > >  | bio merge for 32MB. total 8,192 pages are merged.
> > > > > >  | total elapsed time is over 2ms.
> > > > > >  |------------------ ... ----------------------->|
> > > > > >                                                  | 8,192 pages merged a bio.
> > > > > >                                                  | at this time, first bio submit is done.
> > > > > >                                                  | 1 bio is split to 32 read request and issue.
> > > > > >                                                  |--------------->
> > > > > >                                                   |--------------->
> > > > > >                                                    |--------------->
> > > > > >                                                               ......
> > > > > >                                                                    |--------------->
> > > > > >                                                                     |--------------->|
> > > > > >                           total 19ms elapsed to complete 32MB read done from device. |
> > > > > > 
> > > > > > If bio max size is limited with 1MB, behavior is changed below.
> > > > > > 
> > > > > >  | bio merge for 1MB. 256 pages are merged for each bio.
> > > > > >  | total 32 bio will be made.
> > > > > >  | total elapsed time is over 2ms. it's same.
> > > > > >  | but, first bio submit timing is fast. about 100us.
> > > > > >  |--->|--->|--->|---> ... -->|--->|--->|--->|--->|
> > > > > >       | 256 pages merged a bio.
> > > > > >       | at this time, first bio submit is done.
> > > > > >       | and 1 read request is issued for 1 bio.
> > > > > >       |--------------->
> > > > > >            |--------------->
> > > > > >                 |--------------->
> > > > > >                                       ......
> > > > > >                                                  |--------------->
> > > > > >                                                   |--------------->|
> > > > > >         total 17ms elapsed to complete 32MB read done from device. |
> > > > > 
> > > > > Can you share us if enabling THP in your application can avoid this issue? BTW, you
> > > > > need to make the 32MB buffer aligned with huge page size. IMO, THP perfectly fits
> > > > > your case.
> > > > > 
> > > > 
> > > > THP is enabled already like as below in my environment. It has no effect.
> > > > 
> > > > cat /sys/kernel/mm/transparent_hugepage/enabled
> > > > [always] madvise never
> > > 
> > > The 32MB user buffer needs to be huge page size aligned. If your system
> > > supports bcc/bpftrace, it is quite easy to check if the buffer is
> > > aligned.
> > > 
> > > > 
> > > > This issue was reported from performance benchmark application in open market.
> > > > I can't control application's working in open market.
> > > > It's not only my own case. This issue might be occured in many mobile environment.
> > > > At least, I checked this problem in exynos, and qualcomm chipset.
> > > 
> > > You just said it takes 2ms for building 32MB bio, but you never investigate the
> > > reason. I guess it is from get_user_pages_fast(), but maybe others. Can you dig
> > > further for the reason? Maybe it is one arm64 specific issue.
> > > 
> > > BTW, bio_iov_iter_get_pages() just takes ~200us on one x86_64 VM with THP, which is
> > > observed via bcc/funclatency when running the following workload:
> > > 
> > 
> > I think you focused on bio_iov_iter_get_pages() because I just commented page
> > merge delay only. Sorry about that. I missed details of this issue.
> > Actually there are many operations during while-loop in do_direct_IO().
> > Page merge operation is just one among them. Page merge operation is called
> > by dio_send_cur_page() in while-loop. Below is call stack.
> > 
> > __bio_try_merge_page+0x4c/0x614
> > bio_add_page+0x40/0x12c
> > dio_send_cur_page+0x13c/0x374
> > submit_page_section+0xb4/0x304
> > do_direct_IO+0x3d4/0x854
> > do_blockdev_direct_IO+0x488/0xa18
> > __blockdev_direct_IO+0x30/0x3c
> > f2fs_direct_IO+0x6d0/0xb80
> > generic_file_read_iter+0x284/0x45c
> > f2fs_file_read_iter+0x3c/0xac
> > __vfs_read+0x19c/0x204
> > vfs_read+0xa4/0x144
> > 
> > 2ms delay is not only caused by page merge operation. it inculdes many the
> > other operations too. But those many operations included page merge should
> > be executed more if bio size is grow up.
> 
> OK, got it.
> 
> Then I think you can just limit bio size in dio_bio_add_page() instead of
> doing it for all.

The root cause is the growth of bio size after multipage bvec was applied,
so I think the basic solution is to limit the bio size.
A similar problem might occur in various other scenarios too, because the
bio merge function is called from many places in the kernel. And bio max
size was 1MB before multipage bvec was applied. So I think limiting the bio
size is reasonable. I don't think this solution is bad for anyone, because
it is only activated case by case via the queue flag.
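
With patch 2/2 applied, the cap is togglable per device from user space,
e.g. (device name illustrative):

	echo 1 > /sys/block/sda/queue/limit_bio_size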

> 
> -- 
> Ming
> 
> 
---
Changheun Lee
Samsung Electronics


* Re: [PATCH v4 2/2] bio: add limit_bio_size sysfs
  2021-01-29  3:49     ` [PATCH v4 2/2] bio: add limit_bio_size sysfs Changheun Lee
@ 2021-02-03  9:00       ` Greg KH
       [not found]         ` <CGME20210203093835epcas1p2e86a35ba3012882950abc7013cae59b9@epcas1p2.samsung.com>
  0 siblings, 1 reply; 13+ messages in thread
From: Greg KH @ 2021-02-03  9:00 UTC (permalink / raw)
  To: Changheun Lee
  Cc: hch, Johannes.Thumshirn, asml.silence, axboe, damien.lemoal,
	linux-block, linux-kernel, ming.lei, osandov, patchwork-bot, tj,
	tom.leiming, jisoo2146.oh, junho89.kim, mj0123.lee,
	seunghwan.hyun, sookwan7.kim, woosung2.lee, yt0928.kim

On Fri, Jan 29, 2021 at 12:49:09PM +0900, Changheun Lee wrote:
> Add new sysfs node to limit bio size.

You forgot to also add a new Documentation/ABI/ entry that describes
what this new sysfs file does :(

thanks,

greg k-h


* [PATCH v4 2/2] bio: add limit_bio_size sysfs
       [not found]         ` <CGME20210203093835epcas1p2e86a35ba3012882950abc7013cae59b9@epcas1p2.samsung.com>
@ 2021-02-03  9:22           ` Changheun Lee
  2021-02-03 10:06             ` Greg KH
  0 siblings, 1 reply; 13+ messages in thread
From: Changheun Lee @ 2021-02-03  9:22 UTC (permalink / raw)
  To: gregkh
  Cc: Johannes.Thumshirn, asml.silence, axboe, damien.lemoal, hch,
	jisoo2146.oh, junho89.kim, linux-block, linux-kernel, ming.lei,
	mj0123.lee, nanich.lee, osandov, patchwork-bot, seunghwan.hyun,
	sookwan7.kim, tj, tom.leiming, woosung2.lee, yt0928.kim

Add a limit_bio_size block sysfs node to limit the bio size.
The queue flag QUEUE_FLAG_LIMIT_BIO_SIZE will be set if limit_bio_size is
set, and the bio max size will then be limited to the queue max sectors.

Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
---
 block/blk-sysfs.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index b513f1683af0..840d97f427e6 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -288,6 +288,7 @@ QUEUE_SYSFS_BIT_FNS(nonrot, NONROT, 1);
 QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
 QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
 QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
+QUEUE_SYSFS_BIT_FNS(limit_bio_size, LIMIT_BIO_SIZE, 0);
 #undef QUEUE_SYSFS_BIT_FNS
 
 static ssize_t queue_zoned_show(struct request_queue *q, char *page)
@@ -615,6 +616,7 @@ QUEUE_RW_ENTRY(queue_nonrot, "rotational");
 QUEUE_RW_ENTRY(queue_iostats, "iostats");
 QUEUE_RW_ENTRY(queue_random, "add_random");
 QUEUE_RW_ENTRY(queue_stable_writes, "stable_writes");
+QUEUE_RW_ENTRY(queue_limit_bio_size, "limit_bio_size");
 
 static struct attribute *queue_attrs[] = {
 	&queue_requests_entry.attr,
@@ -648,6 +650,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_rq_affinity_entry.attr,
 	&queue_iostats_entry.attr,
 	&queue_stable_writes_entry.attr,
+	&queue_limit_bio_size_entry.attr,
 	&queue_random_entry.attr,
 	&queue_poll_entry.attr,
 	&queue_wc_entry.attr,
-- 
2.28.0



* Re: [PATCH v4 2/2] bio: add limit_bio_size sysfs
  2021-02-03  9:22           ` Changheun Lee
@ 2021-02-03 10:06             ` Greg KH
       [not found]               ` <CGME20210203113650epcas1p2ea64df5b6349975fa92c1605edc92961@epcas1p2.samsung.com>
  0 siblings, 1 reply; 13+ messages in thread
From: Greg KH @ 2021-02-03 10:06 UTC (permalink / raw)
  To: Changheun Lee
  Cc: Johannes.Thumshirn, asml.silence, axboe, damien.lemoal, hch,
	jisoo2146.oh, junho89.kim, linux-block, linux-kernel, ming.lei,
	mj0123.lee, osandov, patchwork-bot, seunghwan.hyun, sookwan7.kim,
	tj, tom.leiming, woosung2.lee, yt0928.kim

On Wed, Feb 03, 2021 at 06:22:47PM +0900, Changheun Lee wrote:
> Add limit_bio_size block sysfs node to limit bio size.
> Queue flag QUEUE_FLAG_LIMIT_BIO_SIZE will be set if limit_bio_size is set.
> And bio max size will be limited by queue max sectors via
> QUEUE_FLAG_LIMIT_BIO_SIZE set.
> 
> Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
> ---
>  block/blk-sysfs.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index b513f1683af0..840d97f427e6 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -288,6 +288,7 @@ QUEUE_SYSFS_BIT_FNS(nonrot, NONROT, 1);
>  QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
>  QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
>  QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
> +QUEUE_SYSFS_BIT_FNS(limit_bio_size, LIMIT_BIO_SIZE, 0);
>  #undef QUEUE_SYSFS_BIT_FNS
>  
>  static ssize_t queue_zoned_show(struct request_queue *q, char *page)
> @@ -615,6 +616,7 @@ QUEUE_RW_ENTRY(queue_nonrot, "rotational");
>  QUEUE_RW_ENTRY(queue_iostats, "iostats");
>  QUEUE_RW_ENTRY(queue_random, "add_random");
>  QUEUE_RW_ENTRY(queue_stable_writes, "stable_writes");
> +QUEUE_RW_ENTRY(queue_limit_bio_size, "limit_bio_size");
>  
>  static struct attribute *queue_attrs[] = {
>  	&queue_requests_entry.attr,
> @@ -648,6 +650,7 @@ static struct attribute *queue_attrs[] = {
>  	&queue_rq_affinity_entry.attr,
>  	&queue_iostats_entry.attr,
>  	&queue_stable_writes_entry.attr,
> +	&queue_limit_bio_size_entry.attr,

Still no documentation for this new file?

That's not allowed :(

greg k-h


* [PATCH 2/2] bio: add limit_bio_size sysfs
       [not found]               ` <CGME20210203113650epcas1p2ea64df5b6349975fa92c1605edc92961@epcas1p2.samsung.com>
@ 2021-02-03 11:21                 ` Changheun Lee
  2021-02-03 13:03                   ` Greg KH
  0 siblings, 1 reply; 13+ messages in thread
From: Changheun Lee @ 2021-02-03 11:21 UTC (permalink / raw)
  To: gregkh
  Cc: Johannes.Thumshirn, asml.silence, axboe, damien.lemoal, hch,
	jisoo2146.oh, junho89.kim, linux-block, linux-kernel, ming.lei,
	mj0123.lee, nanich.lee, osandov, patchwork-bot, seunghwan.hyun,
	sookwan7.kim, tj, tom.leiming, woosung2.lee, yt0928.kim

Add a limit_bio_size block sysfs node to limit the bio size.
The queue flag QUEUE_FLAG_LIMIT_BIO_SIZE will be set if limit_bio_size is
set, and the bio max size will then be limited to the queue max sectors.

Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
Signed-off-by: nanich.lee <nanich.lee@samsung.com>
---
 Documentation/ABI/testing/sysfs-block | 10 ++++++++++
 Documentation/block/queue-sysfs.rst   |  7 +++++++
 block/blk-sysfs.c                     |  3 +++
 3 files changed, 20 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-block b/Documentation/ABI/testing/sysfs-block
index e34cdeeeb9d4..86a7b15410cf 100644
--- a/Documentation/ABI/testing/sysfs-block
+++ b/Documentation/ABI/testing/sysfs-block
@@ -316,3 +316,13 @@ Description:
 		does not complete in this time then the block driver timeout
 		handler is invoked. That timeout handler can decide to retry
 		the request, to fail it or to start a device recovery strategy.
+
+What:		/sys/block/<disk>/queue/limit_bio_size
+Date:		Feb, 2021
+Contact:	Changheun Lee <nanich.lee@samsung.com>
+Description:
+		(RW) Toggle to set/clear the QUEUE_FLAG_LIMIT_BIO_SIZE queue flag.
+		Queue flag QUEUE_FLAG_LIMIT_BIO_SIZE will be set if limit_bio_size
+		is set, and bio max size will be limited to the queue max sectors.
+		QUEUE_FLAG_LIMIT_BIO_SIZE will be cleared if limit_bio_size is
+		cleared, and the limit on bio max size will be removed.
diff --git a/Documentation/block/queue-sysfs.rst b/Documentation/block/queue-sysfs.rst
index 2638d3446b79..cd371a821855 100644
--- a/Documentation/block/queue-sysfs.rst
+++ b/Documentation/block/queue-sysfs.rst
@@ -273,4 +273,11 @@ devices are described in the ZBC (Zoned Block Commands) and ZAC
 do not support zone commands, they will be treated as regular block devices
 and zoned will report "none".
 
+limit_bio_size (RW)
+-------------------
+This indicates the QUEUE_FLAG_LIMIT_BIO_SIZE queue flag value, and
+QUEUE_FLAG_LIMIT_BIO_SIZE can be changed by writing 1 (set) or 0 (clear)
+to this node. When set, bio max size is limited to the queue max sectors;
+when cleared, the limit on bio max size is removed.
+
 Jens Axboe <jens.axboe@oracle.com>, February 2009
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index b513f1683af0..840d97f427e6 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -288,6 +288,7 @@ QUEUE_SYSFS_BIT_FNS(nonrot, NONROT, 1);
 QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
 QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
 QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
+QUEUE_SYSFS_BIT_FNS(limit_bio_size, LIMIT_BIO_SIZE, 0);
 #undef QUEUE_SYSFS_BIT_FNS
 
 static ssize_t queue_zoned_show(struct request_queue *q, char *page)
@@ -615,6 +616,7 @@ QUEUE_RW_ENTRY(queue_nonrot, "rotational");
 QUEUE_RW_ENTRY(queue_iostats, "iostats");
 QUEUE_RW_ENTRY(queue_random, "add_random");
 QUEUE_RW_ENTRY(queue_stable_writes, "stable_writes");
+QUEUE_RW_ENTRY(queue_limit_bio_size, "limit_bio_size");
 
 static struct attribute *queue_attrs[] = {
 	&queue_requests_entry.attr,
@@ -648,6 +650,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_rq_affinity_entry.attr,
 	&queue_iostats_entry.attr,
 	&queue_stable_writes_entry.attr,
+	&queue_limit_bio_size_entry.attr,
 	&queue_random_entry.attr,
 	&queue_poll_entry.attr,
 	&queue_wc_entry.attr,
-- 
2.28.0



* Re: [PATCH 2/2] bio: add limit_bio_size sysfs
  2021-02-03 11:21                 ` [PATCH " Changheun Lee
@ 2021-02-03 13:03                   ` Greg KH
  0 siblings, 0 replies; 13+ messages in thread
From: Greg KH @ 2021-02-03 13:03 UTC (permalink / raw)
  To: Changheun Lee
  Cc: Johannes.Thumshirn, asml.silence, axboe, damien.lemoal, hch,
	jisoo2146.oh, junho89.kim, linux-block, linux-kernel, ming.lei,
	mj0123.lee, osandov, patchwork-bot, seunghwan.hyun, sookwan7.kim,
	tj, tom.leiming, woosung2.lee, yt0928.kim

On Wed, Feb 03, 2021 at 08:21:07PM +0900, Changheun Lee wrote:
> Add limit_bio_size block sysfs node to limit bio size.
> Queue flag QUEUE_FLAG_LIMIT_BIO_SIZE will be set if limit_bio_size is set.
> And bio max size will be limited by queue max sectors via
> QUEUE_FLAG_LIMIT_BIO_SIZE set.
> 
> Signed-off-by: Changheun Lee <nanich.lee@samsung.com>
> Signed-off-by: nanich.lee <nanich.lee@samsung.com>
> ---
>  Documentation/ABI/testing/sysfs-block | 10 ++++++++++
>  Documentation/block/queue-sysfs.rst   |  7 +++++++
>  block/blk-sysfs.c                     |  3 +++
>  3 files changed, 20 insertions(+)


Hi,

This is the friendly patch-bot of Greg Kroah-Hartman.  You have sent him
a patch that has triggered this response.  He used to manually respond
to these common problems, but in order to save his sanity (he kept
writing the same thing over and over, yet to different people), I was
created.  Hopefully you will not take offence and will fix the problem
in your patch and resubmit it so that it can be accepted into the Linux
kernel tree.

You are receiving this message because of the following common error(s)
as indicated below:

- This looks like a new version of a previously submitted patch, but you
  did not list below the --- line any changes from the previous version.
  Please read the section entitled "The canonical patch format" in the
  kernel file, Documentation/SubmittingPatches for what needs to be done
  here to properly describe this.

If you wish to discuss this problem further, or you have questions about
how to resolve this issue, please feel free to respond to this email and
Greg will reply once he has dug out from the pending patches received
from other developers.

thanks,

greg k-h's patch email bot

