linux-ext4.vger.kernel.org archive mirror
* [PATCH v2] ext4: avoid s_mb_prefetch to be zero in individual scenarios
@ 2020-12-04  3:05 Chunguang Xu
  2020-12-17  6:14 ` Theodore Y. Ts'o
  0 siblings, 1 reply; 2+ messages in thread
From: Chunguang Xu @ 2020-12-04  3:05 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: bzzz, linux-ext4

From: Chunguang Xu <brookxu@tencent.com>

Patch cfd7323 introduced block bitmap prefetching, and expects to read
the block bitmaps of a flex_bg through a single IO. However, it seems
to ignore the value range of s_log_groups_per_flex. When
s_log_groups_per_flex is greater than 27, s_mb_prefetch or
s_mb_prefetch_limit will overflow, causing a divide-by-zero exception.
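
For illustration, here is a minimal userspace sketch (not the kernel
code itself; the value 29 is just a hypothetical on-disk value) of how
the 32-bit arithmetic wraps s_mb_prefetch to zero:

    #include <stdio.h>

    int main(void)
    {
        /* mirrors ext4_mb_init_backend(): 1 << s_log_groups_per_flex */
        unsigned int log_groups_per_flex = 29;  /* > 27, as above */
        unsigned int mb_prefetch = 1U << log_groups_per_flex;

        mb_prefetch *= 8;  /* 0x100000000 truncated to 0 in 32 bits */
        printf("s_mb_prefetch = %u\n", mb_prefetch);  /* prints 0 */

        /* group / s_mb_prefetch in ext4_mb_regular_allocator() then
         * divides by zero */
        return 0;
    }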

In addition, the logic for calculating nr may also be flawed: the size
of a flexbg is fixed during a single mount, but s_mb_prefetch can be
modified, which can cause nr to fall outside the expected range of
[1, flexbg_size].
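
As a purely illustrative example (hypothetical numbers), suppose
flexbg_size is 16 (s_log_groups_per_flex = 4), s_mb_prefetch has been
tuned to 24, and the allocator is at group 25 (see the first hunk
below):

  old: nr = (25 / 24) * 24 = 24; nr = 24 + 24 - 25 = 23, which is
       larger than flexbg_size and crosses into the next flex group.
  new: nr = 16; 25 & 15 = 9, so nr = 16 - 9 = 7; min(7, 24) = 7,
       which stops the prefetch exactly at the flex group boundary.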

 PID: 3873   TASK: ffff88800f11d880  CPU: 2   COMMAND: "executor.4"
 #0 [ffff8880114a6ec0] __show_regs.cold.7 at ffffffff83cf29e2
 #1 [ffff8880114a6f40] do_trap at ffffffff81065c61
 #2 [ffff8880114a6f98] do_error_trap at ffffffff81065d65
 #3 [ffff8880114a6fe0] exc_divide_error at ffffffff83dd2fd4
 #4 [ffff8880114a7000] asm_exc_divide_error at ffffffff83e00872
    [exception RIP: ext4_mb_regular_allocator+3885]
    RIP: ffffffff8191258d  RSP: ffff8880114a70b8  RFLAGS: 00010246
    RAX: 0000000000000000  RBX: 0000000000000000  RCX: ffffffff8191257a
    RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000000000005
    RBP: 0000000000000200   R8: ffff88800f11d880   R9: ffffed1001e23b11
    R10: ffff88800f11d887  R11: ffffed1001e23b10  R12: ffff888010147000
    R13: 0000000000000000  R14: 0000000000000002  R15: dffffc0000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #5 [ffff8880114a7260] ext4_mb_new_blocks at ffffffff8191b6ba
 #6 [ffff8880114a7420] ext4_new_meta_blocks at ffffffff81870d6f
 #7 [ffff8880114a74e8] ext4_xattr_block_set at ffffffff819ced37
 #8 [ffff8880114a7758] ext4_xattr_set_handle at ffffffff819d4776
 #9 [ffff8880114a7928] ext4_xattr_set at ffffffff819d501b
    RIP: 000000000045eb29  RSP: 00007ff74e97bc38  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 000000000055bf00  RCX: 000000000045eb29
    RDX: 00000000200000c0  RSI: 0000000020000080  RDI: 0000000020000040
    RBP: 00000000004b068e   R8: 0000000000000001   R9: 0000000000000000
    R10: 0000000000000002  R11: 0000000000000246  R12: 000000000055bf00
    R13: 00007fff50fc111f  R14: 00007ff74e97bdc0  R15: 0000000000022000
    ORIG_RAX: 00000000000000bc  CS: 0033  SS: 002b

To solve this problem, we need to set an upper limit on s_mb_prefetch.
Since we expect to load the block bitmaps of a flex_bg through a single
IO, a reasonable upper limit can be derived from the IO limit
parameters. Weighing the options, we take BLK_MAX_SEGMENT_SIZE. This
should solve the divide-by-zero problem while avoiding performance
degradation.
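
As a hypothetical sizing example: with 4k blocks (s_blocksize_bits =
12) and the default BLK_MAX_SEGMENT_SIZE of 65536, the new cap
evaluates to 65536 >> (12 - 9) = 8192 groups, so s_mb_prefetch is at
most 8192 * 8 = 65536 and can no longer wrap to zero.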

Reported-by: Tosk Robot <tencent_os_robot@tencent.com>
Signed-off-by: Chunguang Xu <brookxu@tencent.com>
Reviewed-by: Samuel Liao <samuelliao@tencent.com>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
---
 fs/ext4/mballoc.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 24af9ed..e3ea7af 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2395,9 +2395,10 @@ void ext4_mb_prefetch_fini(struct super_block *sb, ext4_group_t group,
 
 				nr = sbi->s_mb_prefetch;
 				if (ext4_has_feature_flex_bg(sb)) {
-					nr = (group / sbi->s_mb_prefetch) *
-						sbi->s_mb_prefetch;
-					nr = nr + sbi->s_mb_prefetch - group;
+					nr = 1 << sbi->s_log_groups_per_flex;
+					if (group & (nr - 1))
+						nr -= group & (nr - 1);
+					nr = min(nr, sbi->s_mb_prefetch);
 				}
 				prefetch_grp = ext4_mb_prefetch(sb, group,
 							nr, &prefetch_ios);
@@ -2732,8 +2733,12 @@ static int ext4_mb_init_backend(struct super_block *sb)
 	}
 
 	if (ext4_has_feature_flex_bg(sb)) {
+		int len;
+
 		/* a single flex group is supposed to be read by a single IO */
-		sbi->s_mb_prefetch = 1 << sbi->s_es->s_log_groups_per_flex;
+		len = min(BLK_MAX_SEGMENT_SIZE >> (sb->s_blocksize_bits - 9),
+			  1 << sbi->s_es->s_log_groups_per_flex);
+		sbi->s_mb_prefetch = len;
 		sbi->s_mb_prefetch *= 8; /* 8 prefetch IOs in flight at most */
 	} else {
 		sbi->s_mb_prefetch = 32;
-- 
1.8.3.1



* Re: [PATCH v2] ext4: avoid s_mb_prefetch to be zero in individual scenarios
  2020-12-04  3:05 [PATCH v2] ext4: avoid s_mb_prefetch to be zero in individual scenarios Chunguang Xu
@ 2020-12-17  6:14 ` Theodore Y. Ts'o
  0 siblings, 0 replies; 2+ messages in thread
From: Theodore Y. Ts'o @ 2020-12-17  6:14 UTC (permalink / raw)
  To: Chunguang Xu; +Cc: adilger.kernel, bzzz, linux-ext4

I cleaned up the commit description and the code slightly; it doesn't
change the generated object but makes the code a bit more concise and
easier to read:

commit 8333bd298d915a2d1c01cbed9287d284aaa04bb1
Author: Chunguang Xu <brookxu@tencent.com>
Date:   Fri Dec 4 11:05:43 2020 +0800

    ext4: avoid s_mb_prefetch to be zero in individual scenarios
    
    Commit cfd732377221 ("ext4: add prefetching for block allocation
    bitmaps") introduced block bitmap prefetching, and expects to read
    the block bitmaps of a flex_bg through a single IO.  However, it
    seems to ignore the value range of s_log_groups_per_flex.  When
    s_log_groups_per_flex is greater than 27, s_mb_prefetch or
    s_mb_prefetch_limit will overflow, causing a divide-by-zero
    exception.
    
    In addition, the logic of calculating nr is also flawed, because
    the size of a flexbg is fixed during a single mount, but
    s_mb_prefetch can be modified, which can cause nr to fall outside
    the expected range of [1, flexbg_size].
    
    To solve this problem, we need to set an upper limit on
    s_mb_prefetch.  Since we expect to load the block bitmaps of a
    flex_bg through a single IO, a reasonable upper limit can be
    derived from the IO limit parameters.  After consideration, we
    chose BLK_MAX_SEGMENT_SIZE.  This solves the divide-by-zero
    problem while avoiding performance degradation.
    
    [ Some minor code simplifications to make the changes easy to follow -- TYT ]
    
    Reported-by: Tosk Robot <tencent_os_robot@tencent.com>
    Signed-off-by: Chunguang Xu <brookxu@tencent.com>
    Reviewed-by: Samuel Liao <samuelliao@tencent.com>
    Reviewed-by: Andreas Dilger <adilger@dilger.ca>
    Link: https://lore.kernel.org/r/1607051143-24508-1-git-send-email-brookxu@tencent.com
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 77815cd110b2..99bf091fee10 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2372,9 +2372,9 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 
 				nr = sbi->s_mb_prefetch;
 				if (ext4_has_feature_flex_bg(sb)) {
-					nr = (group / sbi->s_mb_prefetch) *
-						sbi->s_mb_prefetch;
-					nr = nr + sbi->s_mb_prefetch - group;
+					nr = 1 << sbi->s_log_groups_per_flex;
+					nr -= group & (nr - 1);
+					nr = min(nr, sbi->s_mb_prefetch);
 				}
 				prefetch_grp = ext4_mb_prefetch(sb, group,
 							nr, &prefetch_ios);
@@ -2710,7 +2710,8 @@ static int ext4_mb_init_backend(struct super_block *sb)
 
 	if (ext4_has_feature_flex_bg(sb)) {
 		/* a single flex group is supposed to be read by a single IO */
-		sbi->s_mb_prefetch = 1 << sbi->s_es->s_log_groups_per_flex;
+		sbi->s_mb_prefetch = min(1 << sbi->s_es->s_log_groups_per_flex,
+			BLK_MAX_SEGMENT_SIZE >> (sb->s_blocksize_bits - 9));
 		sbi->s_mb_prefetch *= 8; /* 8 prefetch IOs in flight at most */
 	} else {
 		sbi->s_mb_prefetch = 32;
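
(Relative to v2, the applied version drops the "if (group & (nr - 1))"
guard, since subtracting zero when group is already flex-aligned is a
no-op, and folds the temporary len variable into a single min()
expression, so the generated object code is unchanged.)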
