* Should Never Happen: Resize Inode Corrupt
@ 2019-03-15  6:38 Burke Harper
  2019-03-15 19:19 ` Andreas Dilger
  0 siblings, 1 reply; 6+ messages in thread
From: Burke Harper @ 2019-03-15  6:38 UTC (permalink / raw)
  To: linux-ext4

Over the past weekend, I added 2 more drives to my /dev/md0 array:

sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Dec 16 18:32:08 2017
     Raid Level : raid6
     Array Size : 54697266176 (52163.38 GiB 56010.00 GB)
  Used Dev Size : 7813895168 (7451.91 GiB 8001.43 GB)
   Raid Devices : 9
  Total Devices : 9
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Mar 11 05:13:12 2019
          State : clean
 Active Devices : 9
Working Devices : 9
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : powerhouse:0  (local to host powerhouse)
           UUID : 19b5c7a5:59e4bd00:b4b1c18c:089df9bd
         Events : 45981

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
       3       8       48        3      active sync   /dev/sdd
       5       8      144        4      active sync   /dev/sdj
       4       8      128        5      active sync   /dev/sdi
       6       8      112        6      active sync   /dev/sdh
       8       8       80        7      active sync   /dev/sdf
       7       8       64        8      active sync   /dev/sde

Afterwards I did an fsck:

sudo fsck.ext4 -f /dev/md0
e2fsck 1.42.13 (17-May-2015)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0: 70089/1220923392 files (3.6% non-contiguous),
7726498938/9767368960 blocks

Following that, I tried to perform an offline resize:

sudo resize2fs /dev/md0
resize2fs 1.42.13 (17-May-2015)
Resizing the filesystem on /dev/md0 to 13674316544 (4k) blocks.
Should never happen: resize inode corrupt!

In hindsight, after reading a thread on this list from 2015, it
looks like this should have been an online resize instead.
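
If I understand right, that would have looked something like this (a
sketch only; the /Media10 mount point is taken from the dumpe2fs
output below):

sudo mount /dev/md0 /Media10
sudo resize2fs /dev/md0    # no size argument: grow the mounted fs online to fill the device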

After trying the resize I tried to do another fsck:

sudo fsck.ext4 -f /dev/md0
e2fsck 1.42.13 (17-May-2015)
ext2fs_check_desc: Corrupt group descriptor: bad block for inode table
fsck.ext4: Group descriptors look bad... trying backup blocks...
Superblock has an invalid journal (inode 8).
Clear<y>? yes
*** ext3 journal has been deleted - filesystem is now ext2 only ***

Resize inode not valid.  Recreate<y>? yes

It's been stuck here for days, with:

14827 root      20   0  141796 121044   2688 R  93.8  0.4   5546:26 fsck.ext4

It's been running at around 100% CPU the whole time, and I don't see
any disk I/O happening either.
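
For what it's worth, I'm judging the I/O side with iostat from the
sysstat package, e.g. (the 5 is just a sampling interval in seconds):

iostat -x md0 5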

sudo dumpe2fs -h /dev/md0
dumpe2fs 1.42.13 (17-May-2015)
Filesystem volume name:   <none>
Last mounted on:          /Media10
Filesystem UUID:          d36119d5-e3ec-47f7-b93e-124eb4598367
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index
filetype extent 64bit flex_bg sparse_super large_file huge_file
uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean with errors
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1709293568
Block count:              13674316544
Reserved block count:     683715825
Free blocks:              5886063280
Free inodes:              1709223479
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         4096
Inode blocks per group:   256
RAID stride:              128
RAID stripe width:        256
Flex block group size:    16
Filesystem created:       Sun Dec 17 10:10:08 2017
Last mount time:          Sat Mar  9 17:58:06 2019
Last write time:          Mon Mar 11 05:48:14 2019
Mount count:              0
Maximum mount count:      -1
Last checked:             Mon Mar 11 05:16:14 2019
Check interval:           0 (<none>)
Lifetime writes:          29 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      23fd4260-aee9-4f36-8406-240f3b7a39d2
Journal backup:           inode blocks
Journal superblock magic number invalid!


Should I let the fsck continue, or is it safe to exit and try something else?

I did an offline resize on this same array a few weeks ago and it
worked out just fine.  I'm not sure what happened this time, since I
followed the same steps.

Thanks for any help.


* Re: Should Never Happen: Resize Inode Corrupt
  2019-03-15  6:38 Should Never Happen: Resize Inode Corrupt Burke Harper
@ 2019-03-15 19:19 ` Andreas Dilger
  2019-03-16 17:11   ` Burke Harper
  0 siblings, 1 reply; 6+ messages in thread
From: Andreas Dilger @ 2019-03-15 19:19 UTC (permalink / raw)
  To: Burke Harper; +Cc: linux-ext4

Kill your e2fsck and upgrade to the latest version, 1.44.5, which has a lot of fixes over 1.42.13.

If you have the ability, make a "dd" copy or a snapshot of the filesystem, and run e2fsck on that first.
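
Roughly like this (the destination path is just a placeholder; you
need scratch space on the order of the used blocks):

# image the array (conv=sparse skips writing all-zero blocks)
sudo dd if=/dev/md0 of=/mnt/scratch/md0.img bs=64M conv=sparse status=progress
# then check the copy instead of the live array
sudo e2fsck -f /mnt/scratch/md0.img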

Cheers, Andreas

> On Mar 15, 2019, at 00:38, Burke Harper <go88team@gmail.com> wrote:
> [...]


* Re: Should Never Happen: Resize Inode Corrupt
  2019-03-15 19:19 ` Andreas Dilger
@ 2019-03-16 17:11   ` Burke Harper
  2019-03-16 18:57     ` Andreas Dilger
  0 siblings, 1 reply; 6+ messages in thread
From: Burke Harper @ 2019-03-16 17:11 UTC (permalink / raw)
  To: Andreas Dilger; +Cc: linux-ext4

I updated to 1.44.5.

It's been going for about 21 hours now.  There is some noticeable
I/O this time around, and it looks like it's still trying to
recreate the resize inode.
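
Next time I'll start it with a progress bar, e.g. "e2fsck -C 0 -f
/dev/md0"; the -C 0 prints a completion bar on stdout, so there is
at least something to watch.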

On Fri, Mar 15, 2019 at 3:19 PM Andreas Dilger <adilger@dilger.ca> wrote:
>
> Kill your e2fsck and upgrade to the latest version, 1.44.5, which has a lot of fixes over 1.42.13.
>
> If you have the ability, make a "dd" copy or a snapshot of the filesystem, and run e2fsck on that first.
>
> Cheers, Andreas
>
> > On Mar 15, 2019, at 00:38, Burke Harper <go88team@gmail.com> wrote:
> > [...]


* Re: Should Never Happen: Resize Inode Corrupt
  2019-03-16 17:11   ` Burke Harper
@ 2019-03-16 18:57     ` Andreas Dilger
  2019-03-17 21:19       ` Theodore Ts'o
  0 siblings, 1 reply; 6+ messages in thread
From: Andreas Dilger @ 2019-03-16 18:57 UTC (permalink / raw)
  To: Burke Harper; +Cc: linux-ext4

You could kill e2fsck and disable the resize_inode feature?  There is a different resize mechanism available now (meta_bg) that doesn't need it. 

Cheers, Andreas

> On Mar 16, 2019, at 11:11, Burke Harper <go88team@gmail.com> wrote:
> 
> I updated to 1.44.5.
> 
> It's been going for about 21 hours now.  There is some noticeable
> I/O this time around, and it looks like it's still trying to
> recreate the resize inode.
> 
>> On Fri, Mar 15, 2019 at 3:19 PM Andreas Dilger <adilger@dilger.ca> wrote:
>> [...]


* Re: Should Never Happen: Resize Inode Corrupt
  2019-03-16 18:57     ` Andreas Dilger
@ 2019-03-17 21:19       ` Theodore Ts'o
  2019-03-18  5:24         ` Burke Harper
  0 siblings, 1 reply; 6+ messages in thread
From: Theodore Ts'o @ 2019-03-17 21:19 UTC (permalink / raw)
  To: Andreas Dilger; +Cc: Burke Harper, linux-ext4

On Sat, Mar 16, 2019 at 12:57:57PM -0600, Andreas Dilger wrote:
> You could kill e2fsck and disable the resize_inode feature?  There is a different resize mechanism available now (meta_bg) that doesn't need it.

It looks like the file system was previously 36T and you were trying
to resize it to 51T.  Is that right?  The resize_inode feature should
not have been present at all; it's not valid for file systems > 32TiB.
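
(From your earlier output: 9767368960 4k blocks is about 36.4 TiB,
and 13674316544 4k blocks is about 50.9 TiB, so yes, 36T to 51T.)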

The resize2fs in 1.42 is more than a little bit buggy when dealing
with large file systems > 32TiB, and it sounds like there were some
problems handling the transition from file systems smaller than
32 TiB (where the resize_inode still works) to file systems > 32 TiB
(where we use a new style of on-line resizing, called meta_bg).

Hopefully that's because you used an old 1.42 resize2fs when you
resized it up to 36 TiB, but we should test to make sure it's
currently working correctly.

Similarly, e2fsck shouldn't even be trying to deal with the resize
inode if the file system size is > 32 TiB.  (Or to be more
accurate/pedantic, when the max. block number no longer fits in a
32-bit integer; although if someone is using a 1k or 2k block file
system on a file system that large, they have other problems.  :-)

So yeah, the first thing I would do is use debugfs to clear the
resize_inode feature:

debugfs -w /dev/md0
debugfs: features ^resize_inode
debugfs: clri <7>
debugfs: quit
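
(Inode <7> is the reserved resize inode, so the "clri <7>" zeroes it
out to match the cleared feature flag.)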

And then run e2fsck -f /dev/md0.

					- Ted


* Re: Should Never Happen: Resize Inode Corrupt
  2019-03-17 21:19       ` Theodore Ts'o
@ 2019-03-18  5:24         ` Burke Harper
  0 siblings, 0 replies; 6+ messages in thread
From: Burke Harper @ 2019-03-18  5:24 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Andreas Dilger, linux-ext4

Correct.  It was previously 36T.  A few weeks before that it was
28T.  The resize from 28T to 36T went just fine.

I've upgraded to 1.44.5, cleared the resize_inode feature, and have
restarted e2fsck -f /dev/md0.

Thanks.  I'll check back periodically as this part seems to take a long while.


On Sun, Mar 17, 2019 at 5:19 PM Theodore Ts'o <tytso@mit.edu> wrote:
>
> On Sat, Mar 16, 2019 at 12:57:57PM -0600, Andreas Dilger wrote:
> > You could kill e2fsck and disable the resize_inode feature?  There is a different resize mechanism available now (meta_bg) that doesn't need it.
>
> It looks like the file system was previously 36T and you were trying
> to resize it to 51T.  Is that right?  The resize_inode feature should
> not have been present at all; it's not valid for file systems > 32TiB.
> [...]


Thread overview: 6+ messages
2019-03-15  6:38 Should Never Happen: Resize Inode Corrupt Burke Harper
2019-03-15 19:19 ` Andreas Dilger
2019-03-16 17:11   ` Burke Harper
2019-03-16 18:57     ` Andreas Dilger
2019-03-17 21:19       ` Theodore Ts'o
2019-03-18  5:24         ` Burke Harper
