From: "John Stoffel" <john@stoffel.org>
To: Marcus Lell <marcus.lell@gmail.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: raid5 size reduced after system reinstall, how to fix?
Date: Thu, 14 Jan 2021 16:43:04 -0500	[thread overview]
Message-ID: <24576.47848.628487.833800@quad.stoffel.home> (raw)
In-Reply-To: <CAM7EtNjpS3yr=3XtGkgfc3=L=fSfqJW7P8mSZ__+L7fwjLu8eA@mail.gmail.com>

>>>>> "Marcus" == Marcus Lell <marcus.lell@gmail.com> writes:


Marcus> after reinstalling Gentoo on my main system, I found that my
Marcus> clean raid5 had been resized from ca. 18.2 TB to ca. 2.2 TB.
Marcus> I have no clue why.

How full was your array before?  And what filesystem are you using on
there?  Does the filesystem pass 'fsck' checks?  And did you lose any
data that you know of?

Can you show us the output of:  cat /proc/partitions

because it sounds like your disks got messed up somewhere.
Hmm... did you use the full disks before by any chance?  /dev/sdb,
/dev/sdc and /dev/sdd instead of /dev/sdb1, sdc1 and sdd1?  That might
be the problem.
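
A quick way to eyeball the whole-disk vs. partition sizes (the column
selection here is just one choice, any recent lsblk has these):

   lsblk -b -o NAME,SIZE,TYPE /dev/sdb /dev/sdc /dev/sdd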

If the partitions are too small, what I might do is:

0. don't panic!  And don't do anything without thinking it through and
   taking your time.
1. stop the array completely (see the sketch after this list).
2. go to the RAID wiki and look at the instructions for how to set up
   overlays, so you don't write to your disks while experimenting
   (also sketched below).

   https://raid.wiki.kernel.org/index.php/Linux_Raid

3. Examine sdb, and compare it with sdb1.  See the difference?  You
   might have done a whole-disk setup instead of using partitions.

   mdadm -E /dev/sdb
   mdadm -E /dev/sdb1


4. Check your partitioning.

   If this turns out to be the problem, it might be possible to just
   extend your partitions to the end of the disk, and then re-mount
   your disks (see the sketch after this list).
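
For step 1, assuming nothing is mounted from the array any more,
that's just:

   umount /dev/md0       # only if it's still mounted somewhere
   mdadm --stop /dev/md0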
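
For step 2, the wiki has a script that sets the overlays up properly;
the rough idea for one member device looks something like this (the
10G scratch size and the file paths are just placeholders):

   # scratch file that absorbs all writes during the experiments
   truncate -s 10G /tmp/overlay-sdb1
   loop=$(losetup -f --show /tmp/overlay-sdb1)
   # dm snapshot: reads come from /dev/sdb1, writes go to the loop file
   dmsetup create overlay-sdb1 --table \
      "0 $(blockdev --getsz /dev/sdb1) snapshot /dev/sdb1 $loop P 8"

Repeat for sdc1 and sdd1, then assemble the array from the
/dev/mapper/overlay-* devices instead of the real partitions.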
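
And for step 4, IF the partitions really do turn out to be short,
extending one would look roughly like this -- but only try it after
testing on overlays, and make sure the start sector doesn't move:

   parted /dev/sdb resizepart 1 100%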


Good luck!  Let us know what you find.

John


Marcus> First, the array gets assembled successfully.

Marcus> lxcore ~ # cat /proc/mdstat
Marcus> Personalities : [raid1] [raid6] [raid5] [raid4]
Marcus> md0 : active raid5 sdd1[2] sdb1[1] sdc1[0]
Marcus>       2352740224 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
Marcus>       bitmap: 0/9 pages [0KB], 65536KB chunk

Marcus> unused devices: <none>

Marcus> lxcore ~ # mdadm --detail /dev/md0
Marcus> /dev/md0:
Marcus>            Version : 1.2
Marcus>      Creation Time : Sat Nov 10 23:04:14 2018
Marcus>         Raid Level : raid5
Marcus>         Array Size : 2352740224 (2243.75 GiB 2409.21 GB)
Marcus>      Used Dev Size : 1176370112 (1121.87 GiB 1204.60 GB)
Marcus>       Raid Devices : 3
Marcus>      Total Devices : 3
Marcus>        Persistence : Superblock is persistent

Marcus>      Intent Bitmap : Internal

Marcus>        Update Time : Tue Jan 12 14:58:40 2021
Marcus>              State : clean
Marcus>     Active Devices : 3
Marcus>    Working Devices : 3
Marcus>     Failed Devices : 0
Marcus>      Spare Devices : 0

Marcus>             Layout : left-symmetric
Marcus>         Chunk Size : 64K

Marcus> Consistency Policy : bitmap

Marcus>               Name : lxcore:0  (local to host lxcore)
Marcus>               UUID : 0e3c432b:c68cda5b:0bf31e79:9dfe252b
Marcus>             Events : 80471

Marcus>     Number   Major   Minor   RaidDevice State
Marcus>        0       8       33        0      active sync   /dev/sdc1
Marcus>        1       8       17        1      active sync   /dev/sdb1
Marcus>        2       8       49        2      active sync   /dev/sdd1

Marcus> but the partitions are OK:

Marcus> lxcore ~ # fdisk -l /dev/sdb1
Marcus> Disk /dev/sdb1: 9.1 TiB, 10000830283264 bytes, 19532871647 sectors
Marcus> Units: sectors of 1 * 512 = 512 bytes
Marcus> Sector size (logical/physical): 512 bytes / 4096 bytes
Marcus> I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Marcus> same with sdc1 and sdd1
Marcus> Here is the problem:

Marcus> lxcore ~ # mdadm --examine /dev/sdb1
Marcus> /dev/sdb1:
Marcus>           Magic : a92b4efc
Marcus>         Version : 1.2
Marcus>     Feature Map : 0x1
Marcus>      Array UUID : 0e3c432b:c68cda5b:0bf31e79:9dfe252b
Marcus>            Name : lxcore:0  (local to host lxcore)
Marcus>   Creation Time : Sat Nov 10 23:04:14 2018
Marcus>      Raid Level : raid5
Marcus>    Raid Devices : 3

Marcus>  Avail Dev Size : 19532609759 (9313.87 GiB 10000.70 GB)
Marcus>      Array Size : 2352740224 (2243.75 GiB 2409.21 GB)
Marcus>   Used Dev Size : 2352740224 (1121.87 GiB 1204.60 GB)
Marcus>     Data Offset : 261888 sectors
Marcus>    Super Offset : 8 sectors
Marcus>    Unused Space : before=261800 sectors, after=17179869535 sectors
Marcus>           State : clean
Marcus>     Device UUID : a8fbe4dd:a7ac9c16:d1d29abd:e2a0d573

Marcus> Internal Bitmap : 8 sectors from superblock
Marcus>     Update Time : Tue Jan 12 14:58:40 2021
Marcus>   Bad Block Log : 512 entries available at offset 72 sectors
Marcus>        Checksum : 86f42300 - correct
Marcus>          Events : 80471

Marcus>          Layout : left-symmetric
Marcus>      Chunk Size : 64K

Marcus>    Device Role : Active device 1
Marcus>    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

Marcus> same with /dev/sdc1 and /dev/sdd1
Marcus> It shows that only 1121.87 GiB are used per device instead of
Marcus> the available 9.1 TiB, so the array size comes out to 2.2 TB.


Marcus> Will a simple
Marcus> mdadm --grow --size=max /dev/md0
Marcus> fix this and leave the data untouched?

Marcus> Any further advice?

Marcus> Have a nice day.

Marcus> marcus

Marcus> please CC me, I'm not subscribed.

