linux-raid.vger.kernel.org archive mirror
* raid5 size reduced after system reinstall, how to fix?
@ 2021-01-14  8:42 Marcus Lell
  2021-01-14 21:43 ` John Stoffel
  0 siblings, 1 reply; 4+ messages in thread
From: Marcus Lell @ 2021-01-14  8:42 UTC (permalink / raw)
  To: linux-raid

Hello,

after reinstalling Gentoo on my main system, I found that my clean
RAID5 had shrunk from about 18.2 TB to about 2.2 TB.
I have no idea why.

First, the array assembles successfully:

lxcore ~ # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[2] sdb1[1] sdc1[0]
      2352740224 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/9 pages [0KB], 65536KB chunk

unused devices: <none>

lxcore ~ # mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Nov 10 23:04:14 2018
        Raid Level : raid5
        Array Size : 2352740224 (2243.75 GiB 2409.21 GB)
     Used Dev Size : 1176370112 (1121.87 GiB 1204.60 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Jan 12 14:58:40 2021
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : bitmap

              Name : lxcore:0  (local to host lxcore)
              UUID : 0e3c432b:c68cda5b:0bf31e79:9dfe252b
            Events : 80471

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       17        1      active sync   /dev/sdb1
       2       8       49        2      active sync   /dev/sdd1

But the underlying partitions are still full size:

lxcore ~ # fdisk -l /dev/sdb1
Disk /dev/sdb1: 9.1 TiB, 10000830283264 bytes, 19532871647 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

The same applies to sdc1 and sdd1.
Here is the problem:

lxcore ~ # mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 0e3c432b:c68cda5b:0bf31e79:9dfe252b
           Name : lxcore:0  (local to host lxcore)
  Creation Time : Sat Nov 10 23:04:14 2018
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 19532609759 (9313.87 GiB 10000.70 GB)
     Array Size : 2352740224 (2243.75 GiB 2409.21 GB)
  Used Dev Size : 2352740224 (1121.87 GiB 1204.60 GB)
    Data Offset : 261888 sectors
   Super Offset : 8 sectors
   Unused Space : before=261800 sectors, after=17179869535 sectors
          State : clean
    Device UUID : a8fbe4dd:a7ac9c16:d1d29abd:e2a0d573

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Jan 12 14:58:40 2021
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 86f42300 - correct
         Events : 80471

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

The same applies to /dev/sdc1 and /dev/sdd1.
It shows that only 1121.87 GiB per device are used instead of the
available 9.1 TiB, so the array ends up at about 2.2 TB.
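
If I understand the RAID5 arithmetic right (with 3 devices the array
size is 2 x the used device size), that matches:

   2 x 1121.87 GiB ~= 2243.75 GiB    (the ~2.2 TB mdadm reports now)
   2 x 9313.87 GiB ~= 18627.74 GiB   (~18.2 TiB, roughly what I expect)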


Will a simple
mdadm --grow --size=max /dev/md0
fix this and leave the data untouched?

Any further advice?

Have a nice day.

marcus

please CC me, I'm not subscribed.


* Re: raid5 size reduced after system reinstall, how to fix?
  2021-01-14  8:42 raid5 size reduced after system reinstall, how to fix? Marcus Lell
@ 2021-01-14 21:43 ` John Stoffel
  2021-01-15  7:57   ` Song Liu
  0 siblings, 1 reply; 4+ messages in thread
From: John Stoffel @ 2021-01-14 21:43 UTC (permalink / raw)
  To: Marcus Lell; +Cc: linux-raid

>>>>> "Marcus" == Marcus Lell <marcus.lell@gmail.com> writes:


Marcus> after reinstalling Gentoo on my main system, I found that my
Marcus> clean RAID5 had shrunk from about 18.2 TB to about 2.2 TB.
Marcus> I have no idea why.

How full was your array before?  And what filesystem are you using on
it?  Does the filesystem pass 'fsck' checks?  And did you lose any
data that you know of?

Can you show us the output of:  cat /proc/partitions

because it sounds like your disks got messed up somewhere.
Hmm... did you use the full disks before by any chance?  /dev/sdb,
/dev/sdc and /dev/sdd instead of /dev/sdb1, sdc1 and sdd1?  That might
be the problem.
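
Something read-only like this (just a sketch, adjust the device names
to yours) would show whether md is sitting on whole disks or on
partitions:

   cat /proc/partitions
   lsblk -o NAME,SIZE,TYPE /dev/sdb /dev/sdc /dev/sdd
   mdadm --examine /dev/sdb 2>/dev/null | head -n 5    # superblock on the whole disk?
   mdadm --examine /dev/sdb1 | head -n 5               # superblock on the partition?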

If the partitions are too small, what I might do is:

0. Don't panic!  And don't do anything without thinking and taking
   your time.
1. Stop the array completely.
2. Go to the RAID wiki and look at the instructions for how to set up
   overlays, so you don't write to your disks while experimenting
   (rough sketch below, after this list).

   https://raid.wiki.kernel.org/index.php/Linux_Raid

3. Examine sdb, and compare it with sdb1.  See the difference?  You
   might have done a whole-disk setup instead of using partitions.

   mdadm -E /dev/sdb


4. Check your partitioning.

   It might be possible (if this is the problem) to just extend your
   partitions to the end of the disk, and then re-mount your disks.
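
For step 2, the overlay setup from the wiki boils down to something
like this for each array member -- an untested sketch; the 4G file
size and the paths are just examples, so check it against the wiki
page before running anything:

   truncate -s 4G /tmp/overlay-sdb1              # sparse file to hold the writes
   loop=$(losetup -f --show /tmp/overlay-sdb1)   # attach it to a loop device
   size=$(blockdev --getsz /dev/sdb1)            # member size in 512-byte sectors
   dmsetup create overlay-sdb1 --table "0 $size snapshot /dev/sdb1 $loop P 8"

Then you assemble a throw-away array from the /dev/mapper/overlay-*
devices and do all your experimenting on that, not on the real
partitions.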


Good luck!  Let us know what you find.

John



* Re: raid5 size reduced after system reinstall, how to fix?
  2021-01-14 21:43 ` John Stoffel
@ 2021-01-15  7:57   ` Song Liu
  2021-01-17  9:13     ` Marcus Lell
  0 siblings, 1 reply; 4+ messages in thread
From: Song Liu @ 2021-01-15  7:57 UTC (permalink / raw)
  To: John Stoffel; +Cc: Marcus Lell, linux-raid

On Thu, Jan 14, 2021 at 1:45 PM John Stoffel <john@stoffel.org> wrote:
> Marcus> Will a simple
> Marcus> mdadm --grow --size=max /dev/md0
> Marcus> fix this and leave the data untouched?

Are you running kernel 5.10? If so, please upgrade to 5.10.1 or later.
There was a bug in the 5.10 kernel. After upgrading the kernel, the
mdadm --grow command above should fix this. The --grow will trigger a
resync for the newly grown area. If the array was not in degraded
mode, the resync should not change any data.
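
Roughly (just a sketch, please double-check the md device name before
running it):

   mdadm --grow --size=max /dev/md0
   cat /proc/mdstat                 # watch the resync of the newly exposed space
   mdadm --detail /dev/md0 | grep -E 'Array Size|Used Dev Size'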

Please let me know if this works.

Thanks,
Song


* Re: raid5 size reduced after system reinstall, how to fix?
  2021-01-15  7:57   ` Song Liu
@ 2021-01-17  9:13     ` Marcus Lell
  0 siblings, 0 replies; 4+ messages in thread
From: Marcus Lell @ 2021-01-17  9:13 UTC (permalink / raw)
  To: Song Liu; +Cc: John Stoffel, linux-raid

Hi Song,

On Fri, Jan 15, 2021 at 8:58 AM Song Liu <song@kernel.org> wrote:
>
> Are you running kernel 5.10? If so, please upgrade to 5.10.1 or later.
> There was a bug in the 5.10 kernel. After upgrading the kernel, the
> mdadm --grow command above should fix this. The --grow will trigger a
> resync for the newly grown area. If the array was not in degraded
> mode, the resync should not change any data.
I had indeed booted 5.10.0 a few times.
After upgrading the kernel, running "mdadm --grow --size=max /dev/md0"
fixed it completely.

>
> Please let me know if this works.
Yes.

Thank you.

marcus

