linux-ext4.vger.kernel.org archive mirror
* kernel bug at fs/ext4/resize.c:409
@ 2014-02-03 18:26 Jon Bernard
  2014-02-03 18:56 ` Theodore Ts'o
  2014-02-16  2:35 ` [PATCH] ext4: fix online resize with very large inode tables Theodore Ts'o
  0 siblings, 2 replies; 13+ messages in thread
From: Jon Bernard @ 2014-02-03 18:26 UTC (permalink / raw)
  To: linux-ext4

Hello all,

A coworker is seeing the following bug when attempting to resize a root
volume (during init, by calling resize2fs) from 1GB to the size of the
underlying partition, 20GB.

If the partition size is changed (to e.g. 10GB), the bug does not seem
to trigger.  I have access to this machine, so if there are any
experiments that would provide more useful information, please let me
know.
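For scale, assuming the filesystem's 4 KiB block size, the block counts
in the oops work out to roughly:

```shell
# Convert the block counts from the resize message below to integer GiB,
# assuming the 4 KiB block size reported by dumpe2fs.
echo "$(( 1080688 * 4096 / 1073741824 )) GiB -> $(( 5243214 * 4096 / 1073741824 )) GiB"
```

so the target size matches the 20GB partition.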

    [  112.136896] EXT4-fs (sda1): resizing filesystem from 1080688 to 5243214 blocks
    [  112.164072] ------------[ cut here ]------------
    [  112.164179] kernel BUG at fs/ext4/resize.c:409!
    [  112.164285] invalid opcode: 0000 [#1] SMP 
    [  112.164488] Modules linked in: openvswitch vxlan ip_tunnel gre libcrc32c noui
    [  112.165042] CPU: 0 PID: 968 Comm: resize2fs Tainted: G          I  3.12.9-301
    [  112.165042] Hardware name: Dell Inc. OptiPlex 760                 /0M858N, B9
    [  112.165042] task: ffff8800b7969080 ti: ffff8800b74f4000 task.ti: ffff8800b740
    [  112.165042] RIP: 0010:[<ffffffff81254fa1>]  [<ffffffff81254fa1>] set_flexbg_0
    [  112.165042] RSP: 0018:ffff8800b74f5c28  EFLAGS: 00010216
    [  112.165042] RAX: ffff8800b743bf00 RBX: ffff88007fae9000 RCX: 0000000000001000
    [  112.165042] RDX: ffff88007f4e2c00 RSI: 0000000000000001 RDI: 0000000000000010
    [  112.165042] RBP: ffff8800b74f5c70 R08: ffff8800afd27750 R09: ffff8800b7daee00
    [  112.165042] R10: 0000000000000000 R11: ffff8800afd27750 R12: 0000000000188000
    [  112.165042] R13: 0000000000000010 R14: 0000000000188000 R15: ffff8800b743b800
    [  112.165042] FS:  00007f858a7f4780(0000) GS:ffff8800be800000(0000) knlGS:00000
    [  112.165042] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    [  112.165042] CR2: 00007f8589c1eea6 CR3: 00000000b7979000 CR4: 00000000000407f0
    [  112.165042] Stack:
    [  112.165042]  ffff8800afd27750 ffff8800b96cb060 ffff880000188000 000000108125b
    [  112.165042]  0000000000000010 ffff88007f4e2ed0 00000000000007ff ffff8800b7430
    [  112.165042]  ffff8800b743b800 ffff8800b74f5d68 ffffffff81256768 0000000000180
    [  112.165042] Call Trace:
    [  112.165042]  [<ffffffff81256768>] ext4_flex_group_add+0x1448/0x1830
    [  112.165042]  [<ffffffff81257de2>] ext4_resize_fs+0x7b2/0xe80
    [  112.165042]  [<ffffffff8123ac50>] ext4_ioctl+0xbf0/0xf00
    [  112.165042]  [<ffffffff811c111d>] do_vfs_ioctl+0x2dd/0x4b0
    [  112.165042]  [<ffffffff811b9df2>] ? final_putname+0x22/0x50
    [  112.165042]  [<ffffffff811c1371>] SyS_ioctl+0x81/0xa0
    [  112.165042]  [<ffffffff81676aa9>] system_call_fastpath+0x16/0x1b
    [  112.165042] Code: c8 4c 89 df e8 41 96 f8 ff 44 89 e8 49 01 c4 44 29 6d d4 0 
    [  112.165042] RIP  [<ffffffff81254fa1>] set_flexbg_block_bitmap+0x171/0x180
    [  112.165042]  RSP <ffff8800b74f5c28>
    [  112.175633] ---[ end trace f179f994a575df06 ]---

-- 
Jon


* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-03 18:26 kernel bug at fs/ext4/resize.c:409 Jon Bernard
@ 2014-02-03 18:56 ` Theodore Ts'o
  2014-02-06 21:08   ` Jon Bernard
  2014-02-16  2:35 ` [PATCH] ext4: fix online resize with very large inode tables Theodore Ts'o
  1 sibling, 1 reply; 13+ messages in thread
From: Theodore Ts'o @ 2014-02-03 18:56 UTC (permalink / raw)
  To: linux-ext4

On Mon, Feb 03, 2014 at 01:26:34PM -0500, Jon Bernard wrote:
> Hello all,
> 
> A coworker is seeing the following bug when attempting to resize a root
> volume (during init, by calling resize2fs) from 1GB to the size of the
> underlying partition, 20GB.
> 
> If the partition size is changed (to e.g. 10GB), the bug does not seem
> to trigger.  I have access to this machine, so if there are any
> experiments that would provide more useful information, please let me
> know.

Here are three questions to start:

1)  What kernel version was this oops coming from?

2)  Could you please send me the output of dumpe2fs of the file system?

3)  Can you reproduce the problem?

						- Ted


* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-03 18:56 ` Theodore Ts'o
@ 2014-02-06 21:08   ` Jon Bernard
  2014-02-13 13:24     ` Dmitry Monakhov
  0 siblings, 1 reply; 13+ messages in thread
From: Jon Bernard @ 2014-02-06 21:08 UTC (permalink / raw)
  To: linux-ext4

[-- Attachment #1: Type: text/plain, Size: 1227 bytes --]

* Theodore Ts'o <tytso@mit.edu> wrote:
> On Mon, Feb 03, 2014 at 01:26:34PM -0500, Jon Bernard wrote:
> > Hello all,
> > 
> > A coworker is seeing the following bug when attempting to resize a root
> > volume (during init, by calling resize2fs) from 1GB to the size of the
> > underlying partition, 20GB.
> > 
> > If the partition size is changed (to e.g. 10GB), the bug does not seem
> > to trigger.  I have access to this machine, so if there are any
> > experiments that would provide more useful information, please let me
> > know.
> 
> Here are three questions to start:
> 
> 1)  What kernel version was this oops coming from?

3.12.9-301.fc20.x86_64

> 2)  Could you please send me the output of dumpe2fs of the file system?

Dump attached.

> 3)  Can you reproduce the problem?

It happens every time with this particular filesystem image.  A new
image built with slightly different variables (size, contents, etc.)
usually yields a filesystem that behaves correctly, but once a bad
image is produced, it breaks on resize every time.

Let me know if I can provide any other information.  I have access to
the machine for some time, so I can run a modified kernel or module and
post results if that would help.

Thanks,

-- 
Jon

[-- Attachment #2: dump.txt --]
[-- Type: text/plain, Size: 14952 bytes --]

Filesystem volume name:   cloudimg-rootfs
Last mounted on:          /
Filesystem UUID:          942e3be7-4655-4250-8de4-7e6ffb5595e2
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1080816
Block count:              1080688
Reserved block count:     54034
Free blocks:              338604
Free inodes:              948273
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1024
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         32752
Inode blocks per group:   2047
Flex block group size:    16
Filesystem created:       Mon Feb  3 15:25:08 2014
Last mount time:          Thu Feb  6 18:41:42 2014
Last write time:          Thu Feb  6 18:41:41 2014
Mount count:              18
Maximum mount count:      -1
Last checked:             Mon Feb  3 15:25:08 2014
Check interval:           0 (<none>)
Lifetime writes:          69 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:	          256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       19343
Default directory hash:   half_md4
Directory Hash Seed:      8f043eea-0268-4208-9167-34286deca46f
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             64M
Journal length:           16384
Journal sequence:         0x00000030
Journal start:            1


Group 0: (Blocks 0-32767) [ITABLE_ZEROED]
  Checksum 0xf064, unused inodes 0
  Primary superblock at 0, Group descriptors at 1-1
  Reserved GDT blocks at 2-1025
  Block bitmap at 1026 (+1026), Inode bitmap at 1042 (+1042)
  Inode table at 1058-3104 (+1058)
  0 free blocks, 0 free inodes, 5537 directories
  Free blocks: 
  Free inodes: 
Group 1: (Blocks 32768-65535) [ITABLE_ZEROED]
  Checksum 0xfcf6, unused inodes 12736
  Backup superblock at 32768, Group descriptors at 32769-32769
  Reserved GDT blocks at 32770-33793
  Block bitmap at 1027 (bg #0 + 1027), Inode bitmap at 1043 (bg #0 + 1043)
  Inode table at 3105-5151 (bg #0 + 3105)
  0 free blocks, 12736 free inodes, 565 directories, 12736 unused inodes
  Free blocks: 
  Free inodes: 52769-65504
Group 2: (Blocks 65536-98303) [ITABLE_ZEROED]
  Checksum 0x2191, unused inodes 30532
  Block bitmap at 1028 (bg #0 + 1028), Inode bitmap at 1044 (bg #0 + 1044)
  Inode table at 5152-7198 (bg #0 + 5152)
  0 free blocks, 30532 free inodes, 2220 directories, 30532 unused inodes
  Free blocks: 
  Free inodes: 67725-98256
Group 3: (Blocks 98304-131071) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0xec2b, unused inodes 32752
  Backup superblock at 98304, Group descriptors at 98305-98305
  Reserved GDT blocks at 98306-99329
  Block bitmap at 1029 (bg #0 + 1029), Inode bitmap at 1045 (bg #0 + 1045)
  Inode table at 7199-9245 (bg #0 + 7199)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 98257-131008
Group 4: (Blocks 131072-163839) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0xf126, unused inodes 32752
  Block bitmap at 1030 (bg #0 + 1030), Inode bitmap at 1046 (bg #0 + 1046)
  Inode table at 9246-11292 (bg #0 + 9246)
  132 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 157084-157087, 157184-157311
  Free inodes: 131009-163760
Group 5: (Blocks 163840-196607) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x374c, unused inodes 32752
  Backup superblock at 163840, Group descriptors at 163841-163841
  Reserved GDT blocks at 163842-164865
  Block bitmap at 1031 (bg #0 + 1031), Inode bitmap at 1047 (bg #0 + 1047)
  Inode table at 11293-13339 (bg #0 + 11293)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 163761-196512
Group 6: (Blocks 196608-229375) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0xa529, unused inodes 32752
  Block bitmap at 1032 (bg #0 + 1032), Inode bitmap at 1048 (bg #0 + 1048)
  Inode table at 13340-15386 (bg #0 + 13340)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 196513-229264
Group 7: (Blocks 229376-262143) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x496e, unused inodes 32752
  Backup superblock at 229376, Group descriptors at 229377-229377
  Reserved GDT blocks at 229378-230401
  Block bitmap at 1033 (bg #0 + 1033), Inode bitmap at 1049 (bg #0 + 1049)
  Inode table at 15387-17433 (bg #0 + 15387)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 229265-262016
Group 8: (Blocks 262144-294911) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0xa147, unused inodes 32752
  Block bitmap at 1034 (bg #0 + 1034), Inode bitmap at 1050 (bg #0 + 1050)
  Inode table at 17434-19480 (bg #0 + 17434)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 262017-294768
Group 9: (Blocks 294912-327679) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x06e4, unused inodes 32752
  Backup superblock at 294912, Group descriptors at 294913-294913
  Reserved GDT blocks at 294914-295937
  Block bitmap at 1035 (bg #0 + 1035), Inode bitmap at 1051 (bg #0 + 1051)
  Inode table at 19481-21527 (bg #0 + 19481)
  2 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 305491-305492
  Free inodes: 294769-327520
Group 10: (Blocks 327680-360447) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x53e4, unused inodes 32752
  Block bitmap at 1036 (bg #0 + 1036), Inode bitmap at 1052 (bg #0 + 1052)
  Inode table at 21528-23574 (bg #0 + 21528)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 327521-360272
Group 11: (Blocks 360448-393215) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x2764, unused inodes 32752
  Block bitmap at 1037 (bg #0 + 1037), Inode bitmap at 1053 (bg #0 + 1053)
  Inode table at 23575-25621 (bg #0 + 23575)
  52 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 385319, 385339, 385473, 386011, 386047, 386055, 386065, 386103, 386121, 386141, 386471, 386603, 386615, 386639, 386647, 386679, 386705, 386723, 386731, 386761, 386785, 386797, 386993, 387001, 387023, 387321, 388387, 388555, 388683, 388717, 388743, 389071, 389107, 389155, 389173, 389195, 389295, 389735, 390187, 390223, 390783, 390945, 390987, 391015, 391127, 392079, 392581, 392593, 392611, 392903, 393027, 393087
  Free inodes: 360273-393024
Group 12: (Blocks 393216-425983) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0xb54a, unused inodes 32752
  Block bitmap at 1038 (bg #0 + 1038), Inode bitmap at 1054 (bg #0 + 1054)
  Inode table at 25622-27668 (bg #0 + 25622)
  10189 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 397607, 397623, 397635, 397743, 398211, 398543, 398591, 398717, 398759, 399111, 399263, 399343, 400071, 400143, 400203, 400211, 400219, 400241, 400319, 402071, 402379, 402431, 402731, 402763, 402771, 402839, 402859, 402867, 403057, 403079, 403091, 403147, 403433-403455, 415819-416255, 416271, 416273, 416277-416280, 416285-416287, 416291-416313, 416319-425983
  Free inodes: 393025-425776
Group 13: (Blocks 425984-458751) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
  Checksum 0xedd5, unused inodes 32752
  Block bitmap at 1039 (bg #0 + 1039), Inode bitmap at 1055 (bg #0 + 1055)
  Inode table at 27669-29715 (bg #0 + 27669)
  32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 425984-458751
  Free inodes: 425777-458528
Group 14: (Blocks 458752-491519) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
  Checksum 0x3f13, unused inodes 32752
  Block bitmap at 1040 (bg #0 + 1040), Inode bitmap at 1056 (bg #0 + 1056)
  Inode table at 29716-31762 (bg #0 + 29716)
  32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 458752-491519
  Free inodes: 458529-491280
Group 15: (Blocks 491520-524287) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x5e51, unused inodes 32752
  Block bitmap at 1041 (bg #0 + 1041), Inode bitmap at 1057 (bg #0 + 1057)
  Inode table at 33794-35840 (bg #1 + 1026)
  16384 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 507904-524287
  Free inodes: 491281-524032
Group 16: (Blocks 524288-557055) [ITABLE_ZEROED]
  Checksum 0x7c5f, unused inodes 0
  Block bitmap at 524288 (+0), Inode bitmap at 524304 (+16)
  Inode table at 524320-526366 (+32)
  0 free blocks, 0 free inodes, 2275 directories
  Free blocks: 
  Free inodes: 
Group 17: (Blocks 557056-589823) [ITABLE_ZEROED]
  Checksum 0x1b57, unused inodes 0
  Block bitmap at 524289 (bg #16 + 1), Inode bitmap at 524305 (bg #16 + 17)
  Inode table at 526367-528413 (bg #16 + 2079)
  0 free blocks, 0 free inodes, 3615 directories
  Free blocks: 
  Free inodes: 
Group 18: (Blocks 589824-622591) [ITABLE_ZEROED]
  Checksum 0x2677, unused inodes 22195
  Block bitmap at 524290 (bg #16 + 2), Inode bitmap at 524306 (bg #16 + 18)
  Inode table at 528414-530460 (bg #16 + 4126)
  0 free blocks, 22196 free inodes, 826 directories, 22195 unused inodes
  Free blocks: 
  Free inodes: 600087, 600094-622288
Group 19: (Blocks 622592-655359) [ITABLE_ZEROED]
  Checksum 0x89be, unused inodes 31008
  Block bitmap at 524291 (bg #16 + 3), Inode bitmap at 524307 (bg #16 + 19)
  Inode table at 530461-532507 (bg #16 + 6173)
  0 free blocks, 31008 free inodes, 1744 directories, 31008 unused inodes
  Free blocks: 
  Free inodes: 624033-655040
Group 20: (Blocks 655360-688127) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x3b1a, unused inodes 32752
  Block bitmap at 524292 (bg #16 + 4), Inode bitmap at 524308 (bg #16 + 20)
  Inode table at 532508-534554 (bg #16 + 8220)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 655041-687792
Group 21: (Blocks 688128-720895) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0xd75d, unused inodes 32752
  Block bitmap at 524293 (bg #16 + 5), Inode bitmap at 524309 (bg #16 + 21)
  Inode table at 534555-536601 (bg #16 + 10267)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 687793-720544
Group 22: (Blocks 720896-753663) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x051e, unused inodes 32752
  Block bitmap at 524294 (bg #16 + 6), Inode bitmap at 524310 (bg #16 + 22)
  Inode table at 536602-538648 (bg #16 + 12314)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 720545-753296
Group 23: (Blocks 753664-786431) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x1a1c, unused inodes 32752
  Block bitmap at 524295 (bg #16 + 7), Inode bitmap at 524311 (bg #16 + 23)
  Inode table at 538649-540695 (bg #16 + 14361)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 753297-786048
Group 24: (Blocks 786432-819199) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0xb213, unused inodes 32752
  Block bitmap at 524296 (bg #16 + 8), Inode bitmap at 524312 (bg #16 + 24)
  Inode table at 540696-542742 (bg #16 + 16408)
  0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 
  Free inodes: 786049-818800
Group 25: (Blocks 819200-851967) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x79b0, unused inodes 32752
  Backup superblock at 819200, Group descriptors at 819201-819201
  Reserved GDT blocks at 819202-820225
  Block bitmap at 524297 (bg #16 + 9), Inode bitmap at 524313 (bg #16 + 25)
  Inode table at 542743-544789 (bg #16 + 18455)
  20736 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 827392-835583, 839075-839091, 839140-839536, 839809-840191, 840193-840196, 840198, 840201, 840207, 840214-840217, 840219-840221, 840223-840231, 840237, 840239, 840246-851967
  Free inodes: 818801-851552
Group 26: (Blocks 851968-884735) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
  Checksum 0x0506, unused inodes 32752
  Block bitmap at 524298 (bg #16 + 10), Inode bitmap at 524314 (bg #16 + 26)
  Inode table at 544790-546836 (bg #16 + 20502)
  32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 851968-884735
  Free inodes: 851553-884304
Group 27: (Blocks 884736-917503) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x049f, unused inodes 32752
  Backup superblock at 884736, Group descriptors at 884737-884737
  Reserved GDT blocks at 884738-885761
  Block bitmap at 524299 (bg #16 + 11), Inode bitmap at 524315 (bg #16 + 27)
  Inode table at 546837-548883 (bg #16 + 22549)
  31742 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 885762-917503
  Free inodes: 884305-917056
Group 28: (Blocks 917504-950271) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
  Checksum 0xe183, unused inodes 32752
  Block bitmap at 524300 (bg #16 + 12), Inode bitmap at 524316 (bg #16 + 28)
  Inode table at 548884-550930 (bg #16 + 24596)
  32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 917504-950271
  Free inodes: 917057-949808
Group 29: (Blocks 950272-983039) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
  Checksum 0x0dc4, unused inodes 32752
  Block bitmap at 524301 (bg #16 + 13), Inode bitmap at 524317 (bg #16 + 29)
  Inode table at 550931-552977 (bg #16 + 26643)
  32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 950272-983039
  Free inodes: 949809-982560
Group 30: (Blocks 983040-1015807) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
  Checksum 0xdf87, unused inodes 32752
  Block bitmap at 524302 (bg #16 + 14), Inode bitmap at 524318 (bg #16 + 30)
  Inode table at 552978-555024 (bg #16 + 28690)
  32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 983040-1015807
  Free inodes: 982561-1015312
Group 31: (Blocks 1015808-1048575) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
  Checksum 0xc085, unused inodes 32752
  Block bitmap at 524303 (bg #16 + 15), Inode bitmap at 524319 (bg #16 + 31)
  Inode table at 555025-557071 (bg #16 + 30737)
  32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 1015808-1048575
  Free inodes: 1015313-1048064
Group 32: (Blocks 1048576-1080687) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x4600, unused inodes 32752
  Block bitmap at 1048576 (+0), Inode bitmap at 1048592 (+16)
  Inode table at 1048608-1050654 (+32)
  30063 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
  Free blocks: 1048577-1048591, 1048593-1048607, 1050655-1080687
  Free inodes: 1048065-1080816
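One thing stands out in the summary above: with "Flex block group size:
16" and "Inode blocks per group: 2047", the metadata for a single flex
group slightly exceeds one block group's worth of blocks.  A
back-of-the-envelope check (whether this is actually the trigger is
only a guess):

```shell
# Each flex group packs 16 groups' metadata together: per group, an inode
# table of 2047 blocks plus one block bitmap and one inode bitmap.
flex=16        # Flex block group size (from the dump)
itb=2047       # Inode blocks per group (from the dump)
echo $(( flex * (itb + 2) ))   # 32784, vs 32768 blocks per group
```

which matches the group listing, where group 15's inode table spills
past group 0 into group 1.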


* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-06 21:08   ` Jon Bernard
@ 2014-02-13 13:24     ` Dmitry Monakhov
  2014-02-13 14:53       ` Jon Bernard
  0 siblings, 1 reply; 13+ messages in thread
From: Dmitry Monakhov @ 2014-02-13 13:24 UTC (permalink / raw)
  To: Jon Bernard, linux-ext4; +Cc: Theodore Ts'o

On Thu, 6 Feb 2014 16:08:44 -0500, Jon Bernard <jbernard@tuxion.com> wrote:
Non-text part: multipart/mixed
> * Theodore Ts'o <tytso@mit.edu> wrote:
> > On Mon, Feb 03, 2014 at 01:26:34PM -0500, Jon Bernard wrote:
> > > Hello all,
> > > 
> > > A coworker is seeing the following bug when attempting to resize a root
> > > volume (during init, by calling resize2fs) from 1GB to the size of the
> > > underlying partition, 20GB.
> > > 
> > > If the partition size is changed (to e.g. 10GB), the bug does not seem
> > > to trigger.  I have access to this machine, so if there are any
> > > experiments that would provide more useful information, please let me
> > > know.
> > 
> > Here are three questions to start:
> > 
> > 1)  What kernel version was this oops coming from?
> 
> 3.12.9-301.fc20.x86_64
> 
> > 2)  Could you please send me the output of dumpe2fs of the file system?
> 
> Dump attached.
> 
> > 3)  Can you reproduce the problem?
> 
> It happens every time with this particular filesystem image.  A new
> image built with slightly different variables (size, contents, etc.)
> usually yields a filesystem that behaves correctly, but once a bad
> image is produced, it breaks on resize every time.
> 
> Let me know if I can provide any other information.  I have access to
> the machine for some time, so I can run a modified kernel or module and
> post results if that would help.
> 
> Thanks,
> 
Yep, the same BUG_ON was recently triggered by one of our customers
on an ovzkernel kernel, which is based on the RHEL6 (2.6.32) kernel:
Resize the image /vz/private/2345.private_temporary-XXXX/root.hdd to
536870912K
resize2fs 1.42.3 (14-May-2012)
Filesystem at /dev/ploop15153p1 is mounted on
/vz/private/2345.private_temporary-XXXX/root.hdd/root.hds.mnt; on-line
resizing required
old_desc_blocks = 1, new_desc_blocks = 32
<4>[ 1043.647040] ------------[ cut here ]------------
<2>[ 1043.647067] kernel BUG at fs/ext4/resize.c:375!

But that image has since been destroyed, and we cannot reproduce the
bug at the moment.
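As a sanity check, the "old_desc_blocks = 1, new_desc_blocks = 32" line
is consistent with a resize to 512 GiB, assuming 4 KiB blocks, 32768
blocks per group, and 32-byte group descriptors (those layout details
are not shown in the log above):

```shell
kb=536870912                    # target size in KiB, from the resize2fs output
blocks=$(( kb / 4 ))            # filesystem blocks (4 KiB each)
groups=$(( blocks / 32768 ))    # block groups
echo $(( groups * 32 / 4096 ))  # descriptor blocks needed
```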

Can you please share the image created via e2image, and the block
device size:
#e2image -r /dev/$YOUR_DEV - | bzip2 > img.e2i.bz2
#blockdev --getsz /dev/$YOUR_DEV
along with the resize2fs arguments.
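In case it is useful, here is one way to turn such a dump back into an
image that dumpe2fs can inspect offline (a sketch only; the filenames
and the sector count are placeholders, and blockdev --getsz reports
512-byte sectors):

```shell
SECTORS=41943040                          # placeholder: a 20 GB device
# bzip2 -dc img.e2i.bz2 > img.raw         # unpack the metadata-only e2image dump
: > img.raw                               # stand-in empty file for this sketch
truncate -s $(( SECTORS * 512 )) img.raw  # pad back to the full device size
stat -c %s img.raw                        # 21474836480 bytes (sparse, 20 GiB)
# dumpe2fs -h img.raw                     # then read the superblock offline
```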
> -- 
> Jon
> [dumpe2fs output snipped; identical to the dump.txt attachment earlier
> in the thread]
>   Block bitmap at 524292 (bg #16 + 4), Inode bitmap at 524308 (bg #16 + 20)
>   Inode table at 532508-534554 (bg #16 + 8220)
>   0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 
>   Free inodes: 655041-687792
> Group 21: (Blocks 688128-720895) [INODE_UNINIT, ITABLE_ZEROED]
>   Checksum 0xd75d, unused inodes 32752
>   Block bitmap at 524293 (bg #16 + 5), Inode bitmap at 524309 (bg #16 + 21)
>   Inode table at 534555-536601 (bg #16 + 10267)
>   0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 
>   Free inodes: 687793-720544
> Group 22: (Blocks 720896-753663) [INODE_UNINIT, ITABLE_ZEROED]
>   Checksum 0x051e, unused inodes 32752
>   Block bitmap at 524294 (bg #16 + 6), Inode bitmap at 524310 (bg #16 + 22)
>   Inode table at 536602-538648 (bg #16 + 12314)
>   0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 
>   Free inodes: 720545-753296
> Group 23: (Blocks 753664-786431) [INODE_UNINIT, ITABLE_ZEROED]
>   Checksum 0x1a1c, unused inodes 32752
>   Block bitmap at 524295 (bg #16 + 7), Inode bitmap at 524311 (bg #16 + 23)
>   Inode table at 538649-540695 (bg #16 + 14361)
>   0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 
>   Free inodes: 753297-786048
> Group 24: (Blocks 786432-819199) [INODE_UNINIT, ITABLE_ZEROED]
>   Checksum 0xb213, unused inodes 32752
>   Block bitmap at 524296 (bg #16 + 8), Inode bitmap at 524312 (bg #16 + 24)
>   Inode table at 540696-542742 (bg #16 + 16408)
>   0 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 
>   Free inodes: 786049-818800
> Group 25: (Blocks 819200-851967) [INODE_UNINIT, ITABLE_ZEROED]
>   Checksum 0x79b0, unused inodes 32752
>   Backup superblock at 819200, Group descriptors at 819201-819201
>   Reserved GDT blocks at 819202-820225
>   Block bitmap at 524297 (bg #16 + 9), Inode bitmap at 524313 (bg #16 + 25)
>   Inode table at 542743-544789 (bg #16 + 18455)
>   20736 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 827392-835583, 839075-839091, 839140-839536, 839809-840191, 840193-840196, 840198, 840201, 840207, 840214-840217, 840219-840221, 840223-840231, 840237, 840239, 840246-851967
>   Free inodes: 818801-851552
> Group 26: (Blocks 851968-884735) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
>   Checksum 0x0506, unused inodes 32752
>   Block bitmap at 524298 (bg #16 + 10), Inode bitmap at 524314 (bg #16 + 26)
>   Inode table at 544790-546836 (bg #16 + 20502)
>   32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 851968-884735
>   Free inodes: 851553-884304
> Group 27: (Blocks 884736-917503) [INODE_UNINIT, ITABLE_ZEROED]
>   Checksum 0x049f, unused inodes 32752
>   Backup superblock at 884736, Group descriptors at 884737-884737
>   Reserved GDT blocks at 884738-885761
>   Block bitmap at 524299 (bg #16 + 11), Inode bitmap at 524315 (bg #16 + 27)
>   Inode table at 546837-548883 (bg #16 + 22549)
>   31742 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 885762-917503
>   Free inodes: 884305-917056
> Group 28: (Blocks 917504-950271) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
>   Checksum 0xe183, unused inodes 32752
>   Block bitmap at 524300 (bg #16 + 12), Inode bitmap at 524316 (bg #16 + 28)
>   Inode table at 548884-550930 (bg #16 + 24596)
>   32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 917504-950271
>   Free inodes: 917057-949808
> Group 29: (Blocks 950272-983039) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
>   Checksum 0x0dc4, unused inodes 32752
>   Block bitmap at 524301 (bg #16 + 13), Inode bitmap at 524317 (bg #16 + 29)
>   Inode table at 550931-552977 (bg #16 + 26643)
>   32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 950272-983039
>   Free inodes: 949809-982560
> Group 30: (Blocks 983040-1015807) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
>   Checksum 0xdf87, unused inodes 32752
>   Block bitmap at 524302 (bg #16 + 14), Inode bitmap at 524318 (bg #16 + 30)
>   Inode table at 552978-555024 (bg #16 + 28690)
>   32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 983040-1015807
>   Free inodes: 982561-1015312
> Group 31: (Blocks 1015808-1048575) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
>   Checksum 0xc085, unused inodes 32752
>   Block bitmap at 524303 (bg #16 + 15), Inode bitmap at 524319 (bg #16 + 31)
>   Inode table at 555025-557071 (bg #16 + 30737)
>   32768 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 1015808-1048575
>   Free inodes: 1015313-1048064
> Group 32: (Blocks 1048576-1080687) [INODE_UNINIT, ITABLE_ZEROED]
>   Checksum 0x4600, unused inodes 32752
>   Block bitmap at 1048576 (+0), Inode bitmap at 1048592 (+16)
>   Inode table at 1048608-1050654 (+32)
>   30063 free blocks, 32752 free inodes, 0 directories, 32752 unused inodes
>   Free blocks: 1048577-1048591, 1048593-1048607, 1050655-1080687
>   Free inodes: 1048065-1080816

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-13 13:24     ` Dmitry Monakhov
@ 2014-02-13 14:53       ` Jon Bernard
  2014-02-13 21:18         ` Theodore Ts'o
  0 siblings, 1 reply; 13+ messages in thread
From: Jon Bernard @ 2014-02-13 14:53 UTC (permalink / raw)
  To: Dmitry Monakhov; +Cc: linux-ext4, Theodore Ts'o

* Dmitry Monakhov <dmonakhov@openvz.org> wrote:
> On Thu, 6 Feb 2014 16:08:44 -0500, Jon Bernard <jbernard@tuxion.com> wrote:
> Non-text part: multipart/mixed
> > * Theodore Ts'o <tytso@mit.edu> wrote:
> > > On Mon, Feb 03, 2014 at 01:26:34PM -0500, Jon Bernard wrote:
> > > > Hello all,
> > > > 
> > > > A coworker is seeing the following bug when attempting to resize a root
> > > > volume (during init by calling resizefs) from 1GB to the size of the
> > > > underlying partition, 20GB.
> > > > 
> > > > If the partition size is changed (e.g. to 10GB), the bug seems not to
> > > > trigger.  I have access to this machine; if there are any experiments
> > > > that would provide more useful information - please let me know.
> > > 
> > > Here are three questions to start:
> > > 
> > > 1)  What kernel version was this oops coming from?
> > 
> > 3.12.9-301.fc20.x86_64
> > 
> > > 2)  Could you please send me the output of dumpe2fs of the file system?
> > 
> > Dump attached.
> > 
> > > 3)  Can you reproduce the problem?
> > 
> > It happens every time with this particular filesystem image.  A new
> > image built with slightly different variables (size, contents, etc)
> > usually yields a filesystem that behaves correctly.  But once they have
> > a bad one, it breaks on resize every time.
> > 
> > Let me know if I can provide any other information.  I have access to
> > the machine for some time, so I can run a modified kernel or module and
> > post results if that would help.
> > 
> > Thanks,
> > 
> Yepp.. the same BUG_ON was recently triggered by one of our customers
> on an ovzkernel kernel, which is based on RHEL6's (2.6.32) kernel:
> Resize the image /vz/private/2345.private_temporary-XXXX/root.hdd to
> 536870912K
> resize2fs 1.42.3 (14-May-2012)
> Filesystem at /dev/ploop15153p1 is mounted on
> /vz/private/2345.private_temporary-XXXX/root.hdd/root.hds.mnt; on-line
> resizing required
> old_desc_blocks = 1, new_desc_blocks = 32
> <4>[ 1043.647040] ------------[ cut here ]------------
> <2>[ 1043.647067] kernel BUG at fs/ext4/resize.c:375!
> 
> But that image has since been destroyed, and we cannot reproduce this
> bug at the moment.
> 
> Can you please share an image created via e2image, the block device size:
> #e2image -r /dev/$YOUR_DEV - | bzip2 > img.e2i.bz2
> #blockdev --getsz /dev/$YOUR_DEV
> and the resize2fs arguments.

The image should be available here:

http://c5a6e06e970802d5126f-8c6b900f6923cc24b844c506080778ec.r72.cf1.rackcdn.com/fedora_resize_fails.qcow2

The md5sum is: 267fd37e3a5e1c4d50bd133dd2835c98

-- 
Jon

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-13 14:53       ` Jon Bernard
@ 2014-02-13 21:18         ` Theodore Ts'o
  2014-02-13 21:27           ` Theodore Ts'o
                             ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Theodore Ts'o @ 2014-02-13 21:18 UTC (permalink / raw)
  To: Dmitry Monakhov, linux-ext4

On Thu, Feb 13, 2014 at 09:53:23AM -0500, Jon Bernard wrote:
> The image should be available here:
> 
> http://c5a6e06e970802d5126f-8c6b900f6923cc24b844c506080778ec.r72.cf1.rackcdn.com/fedora_resize_fails.qcow2

Thanks for the image.  I've been able to reproduce the problem, and
it's caused by the fact that the inode table is so large that it's
overflowing into a subsequent block group, and the resize code isn't
handling this.  Fixing this may be a bit tricky, since the flex_bg
online resize code is a bit ugly at the moment, and needs some cleanup
so this can be fixed properly.
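A little arithmetic over the dumpe2fs dump above shows the overflow Ted describes (a sketch; the 4k block size, 256-byte inode size, and 16-group flex_bg factor are inferred from the dump rather than stated in the thread):

```python
# Rough arithmetic behind the overflow, based on the dumpe2fs dump above.
# Assumptions (inferred from the dump, not stated explicitly in the
# thread): 4k blocks, 256-byte inodes, 16 block groups per flex_bg.
BLOCK_SIZE = 4096
BLOCKS_PER_GROUP = 32768
INODES_PER_GROUP = 32752       # from the dump ("32752 free inodes")
INODE_SIZE = 256
FLEX_BG_SIZE = 16              # bitmaps for groups 0-15 all sit in group 0

# One group's inode table: 32752 * 256 / 4096 = 2047 blocks
# (matches e.g. "Inode table at 23575-25621" = 2047 blocks).
itable_blocks = INODES_PER_GROUP * INODE_SIZE // BLOCK_SIZE

# All metadata for one flex_bg: per group, one block bitmap, one inode
# bitmap, and the inode table.
flex_metadata = FLEX_BG_SIZE * (1 + 1 + itable_blocks)

print(itable_blocks)   # 2047
print(flex_metadata)   # 32784 -- more than the 32768 blocks in a single
                       # group, so the last inode table spills into the
                       # next group (see group 15: "bg #1 + 1026" above)
```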

Until that can be done --- one question: was there a deliberate reason
why the file system was created with parameters which allocate 32,752
inodes per block group?  That means that a bit over 8 megabytes of
inode table are being reserved for every 128 megabyte (32768 4k
blocks) block group, and that you have more inodes reserved than could
be used if the average file size is 4k or less.  In fact, the only way
you could run out of inodes is if you had huge numbers of devices,
sockets, small symlinks, or zero-length files in your file system.
This seems to be a bit of a waste of space, in all likelihood.

Don't get me wrong; we should be able to handle this case correctly,
and not trigger a BUG_ON, but this is why most people aren't seeing
this particular fault --- it requires a far greater number of inodes
than mke2fs would ever create by default, or that most system
administrators would try to deliberately specify, when creating the
file system.

I'll look and see what's the best way to fix up fs/ext4/resize.c in
the kernel.

Regards,

						- Ted

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-13 21:18         ` Theodore Ts'o
@ 2014-02-13 21:27           ` Theodore Ts'o
  2014-02-14  3:13           ` Andreas Dilger
  2014-02-14 20:19           ` Jon Bernard
  2 siblings, 0 replies; 13+ messages in thread
From: Theodore Ts'o @ 2014-02-13 21:27 UTC (permalink / raw)
  To: Jon Bernard, Dmitry Monakhov, linux-ext4

+Jon Bernard (who somehow got dropped from the e-mail thread when I tried to reply)

On Thu, Feb 13, 2014 at 04:18:31PM -0500, Theodore Ts'o wrote:
> On Thu, Feb 13, 2014 at 09:53:23AM -0500, Jon Bernard wrote:
> > The image should be available here:
> > 
> > http://c5a6e06e970802d5126f-8c6b900f6923cc24b844c506080778ec.r72.cf1.rackcdn.com/fedora_resize_fails.qcow2
> 
> Thanks for the image.  I've been able to reproduce the problem, and
> it's caused by the fact that the inode table is so large that it's
> overflowing into a subsequent block group, and the resize code isn't
> handling this.  Fixing this may be a bit tricky, since the flex_bg
> online resize code is a bit ugly at the moment, and needs some cleanup
> so this can be fixed properly.
> 
> Until that can be done --- one question: was there a deliberate reason
> why the file system was created with parameters which allocate 32,752
> inodes per block group?  That means that a bit over 8 megabytes of
> inode table are being reserved for every 128 megabyte (32768 4k
> blocks) block group, and that you have more inodes reserved than could
> be used if the average file size is 4k or less.  In fact, the only way
> you could run out of inodes is if you had huge numbers of devices,
> sockets, small symlinks, or zero-length files in your file system.
> This seems to be a bit of a waste of space, in all likelihood.
> 
> Don't get me wrong; we should be able to handle this case correctly,
> and not trigger a BUG_ON, but this is why most people aren't seeing
> this particular fault --- it requires a far greater number of inodes
> than mke2fs would ever create by default, or that most system
> administrators would try to deliberately specify, when creating the
> file system.
> 
> I'll look and see what's the best way to fix up fs/ext4/resize.c in
> the kernel.
> 
> Regards,
> 
> 						- Ted

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-13 21:18         ` Theodore Ts'o
  2014-02-13 21:27           ` Theodore Ts'o
@ 2014-02-14  3:13           ` Andreas Dilger
  2014-02-14 20:19           ` Jon Bernard
  2 siblings, 0 replies; 13+ messages in thread
From: Andreas Dilger @ 2014-02-14  3:13 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Dmitry Monakhov, linux-ext4

On Feb 13, 2014, at 14:18, Theodore Ts'o <tytso@mit.edu> wrote:
> one question: was there a deliberate reason
> why the file system was created with parameters which allocate 32,752
> inodes per block group?  That means that a bit over 8 megabytes of
> inode table are being reserved for every 128 megabyte (32768 4k
> blocks) block group, and that you have more inodes reserved than could
> be used if the average file size is 4k or less.

This is fairly normal for Lustre metadata targets, where the
inodes typically do not have any data blocks. We typically format the
MDT filesystem with an average of 2KB per inode. 

>  In fact, the only way
> you could run out of inodes is if you had huge numbers of devices,
> sockets, small symlinks, or zero-length files in your file system.
> This seems to be a bit of a waste of space, in all likelihood.
> 
> Don't get me wrong; we should be able to handle this case correctly,
> and not trigger a BUG_ON, but this is why most people aren't seeing
> this particular fault --- it requires a far greater number of inodes
> than mke2fs would ever create by default, or that most system
> administrators would try to deliberately specify, when creating the
> file system.

That said, Lustre also doesn't allow online resize yet (no way to pass
the ioctl() from userspace to the underlying ext4 filesystem), so
we haven't hit this yet. 

Cheers, Andreas

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-13 21:18         ` Theodore Ts'o
  2014-02-13 21:27           ` Theodore Ts'o
  2014-02-14  3:13           ` Andreas Dilger
@ 2014-02-14 20:19           ` Jon Bernard
  2014-02-14 23:46             ` Theodore Ts'o
  2 siblings, 1 reply; 13+ messages in thread
From: Jon Bernard @ 2014-02-14 20:19 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Dmitry Monakhov, linux-ext4

* Theodore Ts'o <tytso@mit.edu> wrote:
> On Thu, Feb 13, 2014 at 09:53:23AM -0500, Jon Bernard wrote:
> > The image should be available here:
> > 
> > http://c5a6e06e970802d5126f-8c6b900f6923cc24b844c506080778ec.r72.cf1.rackcdn.com/fedora_resize_fails.qcow2
> 
> Thanks for the image.  I've been able to reproduce the problem, and
> it's caused by the fact that the inode table is so large that it's
> overflowing into a subsequent block group, and the resize code isn't
> handling this.  Fixing this may be a bit tricky, since the flex_bg
> online resize code is a bit ugly at the moment, and needs some cleanup
> so this can be fixed properly.
> 
> Until that can be done --- one question: was there a deliberate reason
> why the file system was created with parameters which allocate 32,752
> inodes per block group?  That means that a bit over 8 megabytes of
> inode table are being reserved for every 128 megabyte (32768 4k
> blocks) block group, and that you have more inodes reserved than could
> be used if the average file size is 4k or less.  In fact, the only way
> you could run out of inodes is if you had huge numbers of devices,
> sockets, small symlinks, or zero-length files in your file system.
> This seems to be a bit of a waste of space, in all likelihood.

Ahh, I see.  Here's where this comes from: the particular use case is
provisioning of new cloud instances whose root volume is of unknown
size.  The filesystem and its contents are created and bundled
beforehand into the smallest filesystem possible.  The instance is PXE
booted for provisioning and the root filesystem is then copied onto the
disk - and then resized to take advantage of the total amount of space.

In order to support very large partitions, the filesystem is created
with an abnormally large inode table so that large resizes would be
possible.  I traced it to this commit as best I can tell:

    https://github.com/openstack/diskimage-builder/commit/fb246a02eb2ed330d3cc37f5795b3ed026aabe07

I assumed that additional inodes would be allocated along with block
groups during an online resize, but that commit contradicts my current
understanding. 

I suggested that the filesystem be created during the time of
provisioning to allow a more optimal on-disk layout, and I believe this
is being considered now.

> Don't get me wrong; we should be able to handle this case correctly,
> and not trigger a BUG_ON, but this is why most people aren't seeing
> this particular fault --- it requires a far greater number of inodes
> than mke2fs would ever create by default, or that most system
> administrators would try to deliberately specify, when creating the
> file system.

Thank you for taking the time to look into this, it is very much
appreciated.

> I'll look and see what's the best way to fix up fs/ext4/resize.c in
> the kernel.

If it turns out to be not terribly complicated and there is not an
immediate time constraint, I would love to try to help with this or at
least test patches.

Cheers,

-- 
Jon

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-14 20:19           ` Jon Bernard
@ 2014-02-14 23:46             ` Theodore Ts'o
  2014-02-15  3:16               ` Darrick J. Wong
  0 siblings, 1 reply; 13+ messages in thread
From: Theodore Ts'o @ 2014-02-14 23:46 UTC (permalink / raw)
  To: Jon Bernard; +Cc: Dmitry Monakhov, linux-ext4

On Fri, Feb 14, 2014 at 03:19:05PM -0500, Jon Bernard wrote:
> Ahh, I see.  Here's where this comes from: the particular usecase is
> provisioning of new cloud instances whose root volume is of unknown
> size.  The filesystem and its contents are created and bundled
> before-hand into the smallest filesystem possible.  The instance is PXE
> booted for provisioning and the root filesystem is then copied onto the
> disk - and then resized to take advantage of the total amount of space.
> 
> In order to support very large partitions, the filesystem is created
> with an abnormally large inode table so that large resizes would be
> possible.  I traced it to this commit as best I can tell:
> 
>     https://github.com/openstack/diskimage-builder/commit/fb246a02eb2ed330d3cc37f5795b3ed026aabe07
> 
> I assumed that additional inodes would be allocated along with block
> groups during an online resize, but that commit contradicts my current
> understanding. 

Additional inodes *are* allocated as the file system is grown.
Whoever thought otherwise was wrong.  What happens is that there is a
fixed number of inodes per block group.  When the file system is
resized, either by growing or shrinking the file system, inodes are
added or removed along with the block groups.
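With the numbers from this thread, that proportionality looks like this (a sketch with a hypothetical helper; the per-group constants are taken from the dumpe2fs dump and the resize message in the original report):

```python
import math

BLOCKS_PER_GROUP = 32768
INODES_PER_GROUP = 32752   # fixed at mkfs time; from the dumpe2fs dump


def total_inodes(fs_blocks):
    """Inode count scales with the number of block groups."""
    groups = math.ceil(fs_blocks / BLOCKS_PER_GROUP)
    return groups * INODES_PER_GROUP


# Before the resize: 1080688 blocks -> 33 groups -> 1080816 inodes,
# which matches the last line of the dump ("Free inodes: ...-1080816").
print(total_inodes(1080688))   # 1080816

# After growing to 5243214 blocks (161 groups), the inode count grows
# in proportion.
print(total_inodes(5243214))   # 5273072
```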

> I suggested that the filesystem be created during the time of
> provisioning to allow a more optimal on-disk layout, and I believe this
> is being considered now.

What causes the most damage in terms of a non-optimal data block
layout is installing the file system on a large device and then
shrinking it to its minimum size using resize2fs -M.  There is
also some non-optimality that occurs as the file system gets filled
beyond about 90% full, but it's not nearly so bad as shrinking
the file system --- which you should avoid at all costs.

From a performance point of view, the only time you should try to do
an off-line resize2fs shrink is if you are shrinking the file system
by a handful of blocks as part of converting a file system in place to
use LVM or LUKS encryption, and you need to make room for some
metadata blocks at the end of the partition.

The other thing to note is that if you are using a format such
as qcow2, or something like the device-mapper's thin-provisioning
(thinp) scheme, or if you are willing to deal with sparse files, one
approach is to not resize the file system at all.  You could just use
a tool like zerofree[1] to zero out all of the unused blocks in the
file system, and then use "/bin/cp --sparse=always" to cause all zero
blocks to be treated as sparse blocks on the destination file.

[1] http://git.kernel.org/cgit/fs/ext2/xfstests-bld.git/tree/kvm-xfstests/util/zerofree.c
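The effect of "cp --sparse=always" can be sketched in a few lines of Python (a simplified illustration, not the actual coreutils implementation): all-zero blocks are skipped with a seek, which leaves holes in the destination file.

```python
import os


def sparse_copy(src_path, dst_path, block_size=4096):
    """Copy src to dst, turning all-zero blocks into holes --
    roughly what `cp --sparse=always` does."""
    zero = b"\0" * block_size
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(block_size)
            if not chunk:
                break
            if chunk == zero:
                # Seek instead of writing: leaves a hole in the file.
                dst.seek(block_size, os.SEEK_CUR)
            else:
                dst.write(chunk)
        # Ensure the file has the right size even if it ends in a hole.
        dst.truncate(src.tell())
```

Reading the copy back returns the same bytes, since holes read as zeros while occupying no disk blocks.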

This is part of how I maintain my root filesystem that I use in a VM
for testing ext4 changes upstream.  After I pull in the latest
Debian unstable package updates and install the latest updates from the
xfstests and e2fsprogs git repositories, I then run the following
script, which uses the zerofree.c program to compress the qcow2 root
file system image that I use with kvm:

http://git.kernel.org/cgit/fs/ext2/xfstests-bld.git/tree/kvm-xfstests/compress-rootfs


Also, starting with e2fsprogs 1.42.10, there's another way you can
efficiently deploy a large file system image by only copying the
blocks which are in use, by using a command like this:

       e2image -rap src_fs dest_fs

(See also the -c flag as described in e2image's man page if you want
to use this technique to do incremental image-based backups onto a
flash-based backup medium; I was using this for a while to keep two
laptop SSD's root filesystem in sync with one another.)

So there are lots of ways that you can do what you need, all without
playing games with resize2fs.  Perhaps some of them would actually be
better for your use case.


> If it turns out to be not terribly complicated and there is not an
> immediate time constraint, I would love to try to help with this or at
> least test patches.

I will hopefully have a bug fix in the next week or two.  

Cheers,

						- Ted

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-14 23:46             ` Theodore Ts'o
@ 2014-02-15  3:16               ` Darrick J. Wong
  2014-02-15 15:34                 ` Theodore Ts'o
  0 siblings, 1 reply; 13+ messages in thread
From: Darrick J. Wong @ 2014-02-15  3:16 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Jon Bernard, Dmitry Monakhov, linux-ext4

Per Ted's request, I've started editing a document on the ext4 wiki:

https://ext4.wiki.kernel.org/index.php/Ext4_VM_Images 

[comments below too]

On Fri, Feb 14, 2014 at 06:46:31PM -0500, Theodore Ts'o wrote:
> On Fri, Feb 14, 2014 at 03:19:05PM -0500, Jon Bernard wrote:
> > Ahh, I see.  Here's where this comes from: the particular usecase is
> > provisioning of new cloud instances whose root volume is of unknown
> > size.  The filesystem and its contents are created and bundled
> > before-hand into the smallest filesystem possible.  The instance is PXE
> > booted for provisioning and the root filesystem is then copied onto the
> > disk - and then resized to take advantage of the total amount of space.
> > 
> > In order to support very large partitions, the filesystem is created
> > with an abnormally large inode table so that large resizes would be
> > possible.  I traced it to this commit as best I can tell:
> > 
> >     https://github.com/openstack/diskimage-builder/commit/fb246a02eb2ed330d3cc37f5795b3ed026aabe07
> > 
> > I assumed that additional inodes would be allocated along with block
> > groups during an online resize, but that commit contradicts my current
> > understanding. 
> 
> Additional inodes *are* allocated as the file system is grown.
> Whoever thought otherwise was wrong.  What happens is that there is a
> fixed number of inodes per block group.  When the file system is
> resized, either by growing or shrinking the file system, inodes are
> added or removed along with the block groups.
> 
> > I suggested that the filesystem be created during the time of
> > provisioning to allow a more optimal on-disk layout, and I believe this
> > is being considered now.
> 
> What causes the most damage in terms of a non-optimal data block
> layout is installing the file system on a large device and then
> shrinking it to its minimum size using resize2fs -M.  There is
> also some non-optimality that occurs as the file system gets filled
> beyond about 90% full, but it's not nearly so bad as shrinking
> the file system --- which you should avoid at all costs.
> 
> From a performance point of view, the only time you should try to do
> an off-line resize2fs shrink is if you are shrinking the file system
> by a handful of blocks as part of converting a file system in place to
> use LVM or LUKS encryption, and you need to make room for some
> metadata blocks at the end of the partition.
> 
> The other thing to note is that if you are using a format such
> as qcow2, or something like the device-mapper's thin-provisioning
> (thinp) scheme, or if you are willing to deal with sparse files, one
> approach is to not resize the file system at all.  You could just use
> a tool like zerofree[1] to zero out all of the unused blocks in the
> file system, and then use "/bin/cp --sparse=always" to cause all zero
> blocks to be treated as sparse blocks on the destination file.
> 
> [1] http://git.kernel.org/cgit/fs/ext2/xfstests-bld.git/tree/kvm-xfstests/util/zerofree.c

I have a zerofree variant that knows how to punch/discard blocks that I'll
throw into contrib/ the next time I send out one of my megapatch sets.

> This is part of how I maintain my root filesystem that I use in a VM
> for testing ext4 changes upstream.  After I update to the latest
> Debian unstable package updates, install the latest updates from the
> xfstests and e2fsprogs git repositories, I then run the following
> script which uses the zerofree.c program to compress the qcow2 root
> file system image that I use with kvm:
> 
> http://git.kernel.org/cgit/fs/ext2/xfstests-bld.git/tree/kvm-xfstests/compress-rootfs
> 
> 
> Also, starting with e2fsprogs 1.42.10, there's another way you can

These three options (-rap) are available in 1.42.9.  Is there a particular
reason not to use them before 1.42.10?

> efficiently deploy a large file system image by only copying the
> blocks which are in use, by using a command like this:
> 
>        e2image -rap src_fs dest_fs
> 
> (See also the -c flag as described in e2image's man page if you want
> to use this technique to do incremental image-based backups onto a
> flash-based backup medium; I was using this for a while to keep two
> laptop SSD's root filesystem in sync with one another.)
> 
> So there are lots of ways that you can do what you need, all without
> playing games with resize2fs.  Perhaps some of them would actually be
> better for your use case.

Calvin Watson noted on Ted's G+ repost that one can use fstrim in newer
versions of QEMU (1.5+?) to punch out unused blocks if the virtual disk is
emulated via virtio-scsi.

--D
> 
> 
> > If it turns out to be not terribly complicated and there is not an
> > immediate time constraint, I would love to try to help with this or at
> > least test patches.
> 
> I will hopefully have a bug fix in the next week or two.  
> 
> Cheers,
> 
> 						- Ted
> --
> To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: kernel bug at fs/ext4/resize.c:409
  2014-02-15  3:16               ` Darrick J. Wong
@ 2014-02-15 15:34                 ` Theodore Ts'o
  0 siblings, 0 replies; 13+ messages in thread
From: Theodore Ts'o @ 2014-02-15 15:34 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: Jon Bernard, Dmitry Monakhov, linux-ext4

On Fri, Feb 14, 2014 at 07:16:24PM -0800, Darrick J. Wong wrote:
> > 
> > Also, starting with e2fsprogs 1.42.10, there's another way you can
> 
> These three options (-rap) are available in 1.42.9.  Is there a particular
> reason not to use it before 1.42.10?

That was a typo, sorry.  I had intended to write 1.42.9, since 1.42.10
isn't out yet.

Checking to be sure: there are some pending Coverity fixes for e2image
that aren't in 1.42.9, but none of them are super serious.  The worst
of them is a printf type mismatch
when printing the final progress report.  This doesn't seem to cause a
problem on x86_64 architectures, but since it is a 64 vs 32 bit
mismatch, it could cause problems on 32 bit architectures.  I haven't
checked to see whether it causes a cosmetic failure, or a seg fault on
an i386 system, but if it is a seg fault, e2image would have completed
the file system copy at that point anyway.  In any case, this will be
fixed in the next release, and the fix is in the Debian package
already.


Thanks for adding this information to the ext4 wiki!

       	   	       		      	  - Ted

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH] ext4: fix online resize with very large inode tables
  2014-02-03 18:26 kernel bug at fs/ext4/resize.c:409 Jon Bernard
  2014-02-03 18:56 ` Theodore Ts'o
@ 2014-02-16  2:35 ` Theodore Ts'o
  1 sibling, 0 replies; 13+ messages in thread
From: Theodore Ts'o @ 2014-02-16  2:35 UTC (permalink / raw)
  To: Ext4 Developers List; +Cc: Jon Bernard, Theodore Ts'o, stable

If a file system has a large number of inodes per block group, all of
the metadata blocks in a flex_bg may be larger than what can fit in a
single block group.  Unfortunately, ext4_alloc_group_tables() in
resize.c was never tested to see if it would handle this case
correctly, and there were a large number of bugs which caused the
following sequence to result in a BUG_ON:

kernel BUG at fs/ext4/resize.c:409!
   ...
Call Trace:
 [<ffffffff81256768>] ext4_flex_group_add+0x1448/0x1830
 [<ffffffff81257de2>] ext4_resize_fs+0x7b2/0xe80
 [<ffffffff8123ac50>] ext4_ioctl+0xbf0/0xf00
 [<ffffffff811c111d>] do_vfs_ioctl+0x2dd/0x4b0
 [<ffffffff811b9df2>] ? final_putname+0x22/0x50
 [<ffffffff811c1371>] sys_ioctl+0x81/0xa0
 [<ffffffff81676aa9>] system_call_fastpath+0x16/0x1b
Code: c8 4c 89 df e8 41 96 f8 ff 44 89 e8 49 01 c4 44 29 6d d4 0
RIP  [<ffffffff81254fa1>] set_flexbg_block_bitmap+0x171/0x180


This can be reproduced with the following command sequence:

   mke2fs -t ext4 -i 4096 /dev/vdd 1G
   mount -t ext4 /dev/vdd /vdd
   resize2fs /dev/vdd 8G

To fix this, we need to make sure the right thing happens when a block
group's inode table straddles two block groups, which means the
following bugs had to be fixed:

1) Not clearing the BLOCK_UNINIT flag in the second block group in
   ext4_alloc_group_tables --- this was the proximate cause of the
   BUG_ON.

2) Incorrectly determining how many block groups contained contiguous
   free blocks in ext4_alloc_group_tables().

3) Incorrectly setting the start of the next block range to be marked
   in use after a discontinuity in setup_new_flex_group_blocks().

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
---
 fs/ext4/resize.c | 32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)

diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
index c5adbb3..69a6261 100644
--- a/fs/ext4/resize.c
+++ b/fs/ext4/resize.c
@@ -243,6 +243,7 @@ static int ext4_alloc_group_tables(struct super_block *sb,
 	ext4_group_t group;
 	ext4_group_t last_group;
 	unsigned overhead;
+	__u16 uninit_mask = (flexbg_size > 1) ? ~EXT4_BG_BLOCK_UNINIT : ~0;
 
 	BUG_ON(flex_gd->count == 0 || group_data == NULL);
 
@@ -266,7 +267,7 @@ next_group:
 	src_group++;
 	for (; src_group <= last_group; src_group++) {
 		overhead = ext4_group_overhead_blocks(sb, src_group);
-		if (overhead != 0)
+		if (overhead == 0)
 			last_blk += group_data[src_group - group].blocks_count;
 		else
 			break;
@@ -280,8 +281,7 @@ next_group:
 		group = ext4_get_group_number(sb, start_blk - 1);
 		group -= group_data[0].group;
 		group_data[group].free_blocks_count--;
-		if (flexbg_size > 1)
-			flex_gd->bg_flags[group] &= ~EXT4_BG_BLOCK_UNINIT;
+		flex_gd->bg_flags[group] &= uninit_mask;
 	}
 
 	/* Allocate inode bitmaps */
@@ -292,22 +292,30 @@ next_group:
 		group = ext4_get_group_number(sb, start_blk - 1);
 		group -= group_data[0].group;
 		group_data[group].free_blocks_count--;
-		if (flexbg_size > 1)
-			flex_gd->bg_flags[group] &= ~EXT4_BG_BLOCK_UNINIT;
+		flex_gd->bg_flags[group] &= uninit_mask;
 	}
 
 	/* Allocate inode tables */
 	for (; it_index < flex_gd->count; it_index++) {
-		if (start_blk + EXT4_SB(sb)->s_itb_per_group > last_blk)
+		unsigned int itb = EXT4_SB(sb)->s_itb_per_group;
+		ext4_fsblk_t next_group_start;
+
+		if (start_blk + itb > last_blk)
 			goto next_group;
 		group_data[it_index].inode_table = start_blk;
-		group = ext4_get_group_number(sb, start_blk - 1);
+		group = ext4_get_group_number(sb, start_blk);
+		next_group_start = ext4_group_first_block_no(sb, group + 1);
 		group -= group_data[0].group;
-		group_data[group].free_blocks_count -=
-					EXT4_SB(sb)->s_itb_per_group;
-		if (flexbg_size > 1)
-			flex_gd->bg_flags[group] &= ~EXT4_BG_BLOCK_UNINIT;
 
+		if (start_blk + itb > next_group_start) {
+			flex_gd->bg_flags[group + 1] &= uninit_mask;
+			overhead = start_blk + itb - next_group_start;
+			group_data[group + 1].free_blocks_count -= overhead;
+			itb -= overhead;
+		}
+
+		group_data[group].free_blocks_count -= itb;
+		flex_gd->bg_flags[group] &= uninit_mask;
 		start_blk += EXT4_SB(sb)->s_itb_per_group;
 	}
 
@@ -620,7 +628,7 @@ handle_ib:
 			if (err)
 				goto out;
 			count = group_table_count[j];
-			start = group_data[i].block_bitmap;
+			start = (&group_data[i].block_bitmap)[j];
 			block = start;
 		}
 
-- 
1.8.5.rc3.362.gdf10213


^ permalink raw reply related	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2014-02-16  2:35 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-02-03 18:26 kernel bug at fs/ext4/resize.c:409 Jon Bernard
2014-02-03 18:56 ` Theodore Ts'o
2014-02-06 21:08   ` Jon Bernard
2014-02-13 13:24     ` Dmitry Monakhov
2014-02-13 14:53       ` Jon Bernard
2014-02-13 21:18         ` Theodore Ts'o
2014-02-13 21:27           ` Theodore Ts'o
2014-02-14  3:13           ` Andreas Dilger
2014-02-14 20:19           ` Jon Bernard
2014-02-14 23:46             ` Theodore Ts'o
2014-02-15  3:16               ` Darrick J. Wong
2014-02-15 15:34                 ` Theodore Ts'o
2014-02-16  2:35 ` [PATCH] ext4: fix online resize with very large inode tables Theodore Ts'o
