* mdadm raid6 recovery status
@ 2012-03-28  2:22 Paramasivam, Meenakshisundaram
  2012-03-28  4:11 ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: Paramasivam, Meenakshisundaram @ 2012-03-28  2:22 UTC (permalink / raw)
  To: linux-raid


Hi,

We have 8 disks (8x2TB=16TB) in an enclosure, with mdadm raid 6 serving a ~12TB volume
from a SATA JBOD through 2 e-SATA ports to the host machine. We had used up 7 TB. One of the e-SATA ports
connecting 4 of the drives stopped responding on the controller, possibly due to heat from the adjacent
FX4800 video card, or the controller momentarily went bad, since we also had power glitches on multiple
machines at the same time. This incident made mdadm report the 4 drives with "removed" status. After
relocating the card inside the machine and rebooting, all of the drives physically came up fine.
Now I was hoping to get the 7 TB of data back, and I did:

#mdadm --assemble --scan
mdadm: /dev/md2 assembled from 4 drives - not enough to start the array.
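
Before forcing anything, the per-device superblocks show why only four drives qualified; a rough check along these lines (same device names as below) compares the event counters, since members whose Events value lags behind the rest are the ones mdadm will not start the array with:

#mdadm --examine /dev/sd[b-i] | egrep 'Events|State'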

Now I wanted to recover the data, so I did the following steps. I see
only 2 TB of the 7 TB when I mount. I know the data is still on the drives,
and I did not run zero-superblock on any of them. How do I get the data
back, reliably and quickly? Using mdadm v3.1.2 on FC14. Any pointers/suggestions
are very much appreciated. Thanks.

#mdadm -v --assemble --force /dev/md2 /dev/sd{b,c,d,e,f,g,h,i}
mdadm: looking for devices for /dev/md2
mdadm: /dev/sdb is identified as a member of /dev/md2, slot 0.
mdadm: /dev/sdc is identified as a member of /dev/md2, slot 1.
mdadm: /dev/sdd is identified as a member of /dev/md2, slot 2.
mdadm: /dev/sde is identified as a member of /dev/md2, slot 3.
mdadm: /dev/sdf is identified as a member of /dev/md2, slot 4.
mdadm: /dev/sdg is identified as a member of /dev/md2, slot 5.
mdadm: /dev/sdh is identified as a member of /dev/md2, slot 6.
mdadm: /dev/sdi is identified as a member of /dev/md2, slot 7.
mdadm: forcing event count in /dev/sdh(6) from 220810 upto 220815
mdadm: forcing event count in /dev/sdf(4) from 220809 upto 220815
mdadm: forcing event count in /dev/sdi(7) from 220809 upto 220815
mdadm: clearing FAULTY flag for device 4 in /dev/md2 for /dev/sdf
mdadm: clearing FAULTY flag for device 6 in /dev/md2 for /dev/sdh
mdadm: clearing FAULTY flag for device 7 in /dev/md2 for /dev/sdi
mdadm: added /dev/sdc to /dev/md2 as 1
mdadm: added /dev/sdd to /dev/md2 as 2
mdadm: added /dev/sde to /dev/md2 as 3
mdadm: added /dev/sdf to /dev/md2 as 4
mdadm: added /dev/sdg to /dev/md2 as 5
mdadm: added /dev/sdh to /dev/md2 as 6
mdadm: added /dev/sdi to /dev/md2 as 7
mdadm: added /dev/sdb to /dev/md2 as 0
mdadm: /dev/md2 has been started with 7 drives (out of 8).

# mdadm -D /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Fri Dec 16 17:56:14 2011
     Raid Level : raid6
     Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
  Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
   Raid Devices : 8
  Total Devices : 7
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Mon Mar 26 13:53:25 2012
          State : clean, degraded

 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 4fcdcafa:fea0c196:4d5dd1d0:da2b21e5
         Events : 0.220827

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       80        4      active sync   /dev/sdf
       5       0        0        5      removed
       6       8      112        6      active sync   /dev/sdh
       7       8      128        7      active sync   /dev/sdi

# mdadm /dev/md2 -a /dev/sdg
mdadm: re-added /dev/sdg

This step took 24 hours to rebuild.
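
While a rebuild like this runs, progress can be watched in /proc/mdstat, and if the array is otherwise idle the resync speed floor can be raised; a rough sketch (the 100000 KB/s figure is only an example):

#cat /proc/mdstat
#echo 100000 > /proc/sys/dev/raid/speed_limit_min
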
#mdadm --stop

#mdadm --assemble --scan
        where /etc/mdadm.conf has a line
            ARRAY /dev/md2 level=raid6 num-devices=8 \
                       devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi
            [yes, no partitions; it had been this way and has always worked]
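
For what it's worth, identifying the array by UUID rather than by a fixed devices= list is usually more robust against drive letters shifting between reboots; a sketch of such an entry, using the UUID from the --detail output below (mdadm --detail --scan prints a line in roughly this form):

            ARRAY /dev/md2 level=raid6 num-devices=8 UUID=4fcdcafa:fea0c196:4d5dd1d0:da2b21e5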

#mount   /dev/md2   /myarray
It mounts fine; however, I now see only 2 TB instead of 7 TB for /myarray. I need to get all of the data back, and I am stuck here.
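
One thing worth comparing at this point is what the ext3 superblock itself reports for the filesystem size and usage, since df simply echoes those counters; a read-only check might look like:

#dumpe2fs -h /dev/md2 | egrep -i 'block count|block size|free blocks'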

Sundar









* Re: mdadm raid6 recovery status
  2012-03-28  2:22 mdadm raid6 recovery status Paramasivam, Meenakshisundaram
@ 2012-03-28  4:11 ` NeilBrown
  2012-03-28 12:49   ` Paramasivam, Meenakshisundaram
  0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2012-03-28  4:11 UTC (permalink / raw)
  To: Paramasivam, Meenakshisundaram; +Cc: linux-raid


On Wed, 28 Mar 2012 02:22:33 +0000 "Paramasivam, Meenakshisundaram"
<mparamas@iupui.edu> wrote:


> #mount   /dev/md2   /myarray
> mounts fine, however, I now see only 2TB instead of 7 TB for /myarray. I need to get all of the data back and I am stuck here.

What exactly do you mean by "see only 2TB"??
The md array is reporting that it is 12TB in size (12000 GB).

What reports 2TB?  /proc/partitions? df? du?

NeilBrown


* RE: mdadm raid6 recovery status
  2012-03-28  4:11 ` NeilBrown
@ 2012-03-28 12:49   ` Paramasivam, Meenakshisundaram
  2012-03-28 19:39     ` John Robinson
  2012-03-28 23:27     ` NeilBrown
  0 siblings, 2 replies; 10+ messages in thread
From: Paramasivam, Meenakshisundaram @ 2012-03-28 12:49 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

[root@in-rady-neuro9 ~]# df -kl /myarray
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md2             11537161976    162432 10950945196   1% /myarray
Should be 7TB of used space.

[root@in-rady-neuro9 ~]# cat /proc/partitions
major minor  #blocks  name

   8        0  438960128 sda
   8        1     512000 sda1
   8        2   51200000 sda2
   8        3  387247104 sda3
   8       16 1953514584 sdb
   8       32 1953514584 sdc
   8       48 1953514584 sdd
   8       64 1953514584 sde
   8       80 1953514584 sdf
   8       96 1953514584 sdg
   8      112 1953514584 sdh
   8      128 1953514584 sdi
 253        0  346226688 dm-0
 253        1   40992768 dm-1

sd[b-i] are raid devices

[root@in-rady-neuro9 ~]# mdadm --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Fri Dec 16 17:56:14 2011
     Raid Level : raid6
     Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
  Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)  <<<====== Wrong! Should be 7TB of used array space.
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Mar 27 14:58:01 2012
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 4fcdcafa:fea0c196:4d5dd1d0:da2b21e5
         Events : 0.220864

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       80        4      active sync   /dev/sdf
       5       8       96        5      active sync   /dev/sdg
       6       8      112        6      active sync   /dev/sdh
       7       8      128        7      active sync   /dev/sdi
[root@in-rady-neuro9 ~]#


________________________________________
From: NeilBrown [neilb@suse.de]
Sent: Wednesday, March 28, 2012 12:11 AM
To: Paramasivam, Meenakshisundaram
Cc: linux-raid@vger.kernel.org
Subject: Re: mdadm raid6 recovery status

On Wed, 28 Mar 2012 02:22:33 +0000 "Paramasivam, Meenakshisundaram"
<mparamas@iupui.edu> wrote:


> #mount   /dev/md2   /myarray
> mounts fine, however, I now see only 2TB instead of 7 TB for /myarray. I need to get all of the data back and I am stuck here.

What exactly do you mean by "see only 2TB"??
The md array is reporting that it is 12TB in size (12000 GB).

What reports 2TB?  /proc/partitions? df? du? ?/

NeilBrown


* Re: mdadm raid6 recovery status
  2012-03-28 12:49   ` Paramasivam, Meenakshisundaram
@ 2012-03-28 19:39     ` John Robinson
  2012-03-28 23:27     ` NeilBrown
  1 sibling, 0 replies; 10+ messages in thread
From: John Robinson @ 2012-03-28 19:39 UTC (permalink / raw)
  To: Paramasivam, Meenakshisundaram; +Cc: linux-raid

On 28/03/2012 13:49, Paramasivam, Meenakshisundaram wrote:
> [root@in-rady-neuro9 ~]# df -kl /myarray
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/md2             11537161976    162432 10950945196   1% /myarray
> Should be 7TB of used space.

The device is showing as a 12TB array. Maybe fsck lost some of your data?

> [root@in-rady-neuro9 ~]# mdadm --detail /dev/md2
> /dev/md2:
>          Version : 0.90
>    Creation Time : Fri Dec 16 17:56:14 2011
>       Raid Level : raid6
>       Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
>    Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)<<<====== Wrong! Should be 7TB of used array space.

No it shouldn't. Your component devices are indeed 2TB, and that is what 
is shown here.

md doesn't know how much data you have on the filesystem on top of the 
array.
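
Putting numbers to that: raid6 keeps two devices' worth of parity, so with the per-device size from --detail above the usable capacity works out exactly as reported, regardless of how full the filesystem is:

# echo $(( (8 - 2) * 1953514496 ))
11721086976

That is the 11721086976 1K-blocks (about 11178 GiB) shown as "Array Size".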

Cheers,

John.


* Re: mdadm raid6 recovery status
  2012-03-28 12:49   ` Paramasivam, Meenakshisundaram
  2012-03-28 19:39     ` John Robinson
@ 2012-03-28 23:27     ` NeilBrown
  2012-03-29 17:33       ` Paramasivam, Meenakshisundaram
  1 sibling, 1 reply; 10+ messages in thread
From: NeilBrown @ 2012-03-28 23:27 UTC (permalink / raw)
  To: Paramasivam, Meenakshisundaram; +Cc: linux-raid


On Wed, 28 Mar 2012 12:49:18 +0000 "Paramasivam, Meenakshisundaram"
<mparamas@iupui.edu> wrote:

> [root@in-rady-neuro9 ~]# df -kl /myarray
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/md2             11537161976    162432 10950945196   1% /myarray
> Should be 7TB of used space.

This is bad.  Something has happened to your filesystem.
It is almost as though someone ran "mkfs" on the array.
I don't know much about recovery after such an action, but I doubt you
will get much back.

> 
> [root@in-rady-neuro9 ~]# cat /proc/partitions
> major minor  #blocks  name
> 
>    8        0  438960128 sda
>    8        1     512000 sda1
>    8        2   51200000 sda2
>    8        3  387247104 sda3
>    8       16 1953514584 sdb
>    8       32 1953514584 sdc
>    8       48 1953514584 sdd
>    8       64 1953514584 sde
>    8       80 1953514584 sdf
>    8       96 1953514584 sdg
>    8      112 1953514584 sdh
>    8      128 1953514584 sdi
>  253        0  346226688 dm-0
>  253        1   40992768 dm-1

No md2 ???

> 
> sd[b-i] are raid devices
> 
> [root@in-rady-neuro9 ~]# mdadm --detail /dev/md2
> /dev/md2:
>         Version : 0.90
>   Creation Time : Fri Dec 16 17:56:14 2011
>      Raid Level : raid6
>      Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
>   Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)  <<<====== Wrong! Should be 7TB of used array space.

"Used Dev Size" isn't "how much of the array is used by the filesystem" -
mdadm doesn't know anything about filesystems.
It is "How much of each individual device is used by the array", which is
usually a little less than the size of the smallest device.
So 2TB is correct here.


NeilBrown




* RE: mdadm raid6 recovery status
  2012-03-28 23:27     ` NeilBrown
@ 2012-03-29 17:33       ` Paramasivam, Meenakshisundaram
  2012-03-29 18:47         ` Paramasivam, Meenakshisundaram
  0 siblings, 1 reply; 10+ messages in thread
From: Paramasivam, Meenakshisundaram @ 2012-03-29 17:33 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid


Good news: we got ALL of our data back. [Actually it was 4.96 TB, not 7 TB.]
                     mdadm is a good one.

Bad news: "df" is reporting the usage wrong, while "du" shows the full size.
# df -kl /myarray
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md2             11537161976    162432 10950945196   1% /myarray
# du -sk /myarray
5326133556      /myarray
#

I never ran du or looked in depth at the files & folders, and simply got misled by the reported "df" usage; the data was there all along. We definitely want "df" for the array's filesystem (ext3) to report correctly.
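
On ext3, df reports the free/used counters kept in the superblock rather than walking the tree the way du does, so counters left stale by an unclean event produce exactly this kind of mismatch, and they are the sort of thing e2fsck corrects. A couple of read-only checks, assuming the filesystem on /dev/md2 is the one mounted at /myarray:

# dumpe2fs -h /dev/md2 | egrep -i 'block count|free blocks'
# df -i /myarray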

Now that we are backing up all of the data (at 400 Mbps) over the network, I want to know whether the "df" reporting can be fixed easily, or whether I should create a new array and restore the data from backup.

We are ordering a new RAID card, just to be on the safe side.

Sundar

________________________________________
From: NeilBrown [neilb@suse.de]
Sent: Wednesday, March 28, 2012 7:27 PM
To: Paramasivam, Meenakshisundaram
Cc: linux-raid@vger.kernel.org
Subject: Re: mdadm raid6 recovery status

On Wed, 28 Mar 2012 12:49:18 +0000 "Paramasivam, Meenakshisundaram"
<mparamas@iupui.edu> wrote:

> [root@in-rady-neuro9 ~]# df -kl /myarray
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/md2             11537161976    162432 10950945196   1% /myarray
> Should be 7TB of used space.

This is bad.  Something has happened to your filesystem.
It is almost as though someone ran "mkfs" on the array.
I don't know much about recovery after such an action, but I doubt you
will get much back.

>
> [root@in-rady-neuro9 ~]# cat /proc/partitions
> major minor  #blocks  name
>
>    8        0  438960128 sda
>    8        1     512000 sda1
>    8        2   51200000 sda2
>    8        3  387247104 sda3
>    8       16 1953514584 sdb
>    8       32 1953514584 sdc
>    8       48 1953514584 sdd
>    8       64 1953514584 sde
>    8       80 1953514584 sdf
>    8       96 1953514584 sdg
>    8      112 1953514584 sdh
>    8      128 1953514584 sdi
>  253        0  346226688 dm-0
>  253        1   40992768 dm-1

No md2 ???

>
> sd[b-i] are raid devices
>
> [root@in-rady-neuro9 ~]# mdadm --detail /dev/md2
> /dev/md2:
>         Version : 0.90
>   Creation Time : Fri Dec 16 17:56:14 2011
>      Raid Level : raid6
>      Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
>   Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)  <<<====== Wrong! Should be 7TB of used array space.

"Used Dev Size" isn't "how much of the array is used by the filesystem" -
mdadm doesn't know anything about filesystems.
It is "How much of each individual device is used by the array", which is
usually a little less than the size of the smallest device.
So 2TB is correct here.


NeilBrown




* RE: mdadm raid6 recovery status
  2012-03-29 17:33       ` Paramasivam, Meenakshisundaram
@ 2012-03-29 18:47         ` Paramasivam, Meenakshisundaram
  2012-03-29 19:41           ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: Paramasivam, Meenakshisundaram @ 2012-03-29 18:47 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid


Clarification:
>>should I do new array creation
I meant running newfs on the assembled 12 TB array and restoring the data from backup, to resolve the "df" reporting problem.

________________________________________
From: Paramasivam, Meenakshisundaram
Sent: Thursday, March 29, 2012 1:33 PM
To: NeilBrown
Cc: linux-raid@vger.kernel.org
Subject: RE: mdadm raid6 recovery status

Good news: Got ALL of our data back. [Actually it was 4.96TB not 7TB].
                     mdadm is a good one.

Bad news: "df" is reporting wrong, while "du" is showing full size.
# df -kl /myarray
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md2             11537161976    162432 10950945196   1% /myarray
# du -sk /myarray
5326133556      /myarray
#

I never looked into du or looked in depth of the files & folders and simply got mislead by reported "df" usage; data was there all along. We definitely want "df" for the array's filesystem (ext3) to report right.

Now that we are backing up all of the data (at 400 Mbps) over network, I want to know if "df" reporting can be fixed easily or should I do new array creation and restore data from backup.

We are ordering a new RAID card, just to be on safer side.

Sundar

________________________________________
From: NeilBrown [neilb@suse.de]
Sent: Wednesday, March 28, 2012 7:27 PM
To: Paramasivam, Meenakshisundaram
Cc: linux-raid@vger.kernel.org
Subject: Re: mdadm raid6 recovery status

On Wed, 28 Mar 2012 12:49:18 +0000 "Paramasivam, Meenakshisundaram"
<mparamas@iupui.edu> wrote:

> [root@in-rady-neuro9 ~]# df -kl /myarray
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/md2             11537161976    162432 10950945196   1% /myarray
> Should be 7TB of used space.

This is bad.  Something has happened to your filesystem.
It is almost as though someone ran "mkfs" on the array.
I don't know much about recovery after such an action, but I doubt you
will get much back.

>
> [root@in-rady-neuro9 ~]# cat /proc/partitions
> major minor  #blocks  name
>
>    8        0  438960128 sda
>    8        1     512000 sda1
>    8        2   51200000 sda2
>    8        3  387247104 sda3
>    8       16 1953514584 sdb
>    8       32 1953514584 sdc
>    8       48 1953514584 sdd
>    8       64 1953514584 sde
>    8       80 1953514584 sdf
>    8       96 1953514584 sdg
>    8      112 1953514584 sdh
>    8      128 1953514584 sdi
>  253        0  346226688 dm-0
>  253        1   40992768 dm-1

No md2 ???

>
> sd[b-i] are raid devices
>
> [root@in-rady-neuro9 ~]# mdadm --detail /dev/md2
> /dev/md2:
>         Version : 0.90
>   Creation Time : Fri Dec 16 17:56:14 2011
>      Raid Level : raid6
>      Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
>   Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)  <<<====== Wrong! Should be 7TB of used array space.

"Used Dev Size" isn't "how much of the array is used by the filesystem" -
mdadm doesn't know anything about filesystems.
It is "How much of each individual device is used by the array", which is
usually a little less than the size of the smallest device.
So 2TB is correct here.


NeilBrown




* Re: mdadm raid6 recovery status
  2012-03-29 18:47         ` Paramasivam, Meenakshisundaram
@ 2012-03-29 19:41           ` NeilBrown
  2012-03-30 21:22             ` Paramasivam, Meenakshisundaram
  0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2012-03-29 19:41 UTC (permalink / raw)
  To: Paramasivam, Meenakshisundaram; +Cc: linux-raid


On Thu, 29 Mar 2012 18:47:14 +0000 "Paramasivam, Meenakshisundaram"
<mparamas@iupui.edu> wrote:

> 
> Clarification:
> >>should I do new array creation 
> I meant running newfs on assembled 12 TB array, and restore data from backup, to resolve "df" reporting problem.

I would suggest asking on 
    linux-ext4@vger.kernel.org

be sure to give lots of details - kernel version etc.
It would be worth running 
   fsck -n /dev/md2
first and see if it reports anything strange.
Maybe  just a fsck will fix it.

NeilBrown
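
A minimal sketch of that read-only check, assuming the ext3 filesystem is unmounted first so the report is reliable (-f forces a full check even if the filesystem looks clean, -n answers "no" to every prompt so nothing is modified):

#umount /myarray
#e2fsck -fn /dev/md2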


> 
> ________________________________________
> From: Paramasivam, Meenakshisundaram
> Sent: Thursday, March 29, 2012 1:33 PM
> To: NeilBrown
> Cc: linux-raid@vger.kernel.org
> Subject: RE: mdadm raid6 recovery status
> 
> Good news: Got ALL of our data back. [Actually it was 4.96TB not 7TB].
>                      mdadm is a good one.
> 
> Bad news: "df" is reporting wrong, while "du" is showing full size.
> # df -kl /myarray
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/md2             11537161976    162432 10950945196   1% /myarray
> # du -sk /myarray
> 5326133556      /myarray
> #
> 
> I never looked into du or looked in depth of the files & folders and simply got mislead by reported "df" usage; data was there all along. We definitely want "df" for the array's filesystem (ext3) to report right.
> 
> Now that we are backing up all of the data (at 400 Mbps) over network, I want to know if "df" reporting can be fixed easily or should I do new array creation and restore data from backup.
> 
> We are ordering a new RAID card, just to be on safer side.
> 
> Sundar
> 
> ________________________________________
> From: NeilBrown [neilb@suse.de]
> Sent: Wednesday, March 28, 2012 7:27 PM
> To: Paramasivam, Meenakshisundaram
> Cc: linux-raid@vger.kernel.org
> Subject: Re: mdadm raid6 recovery status
> 
> On Wed, 28 Mar 2012 12:49:18 +0000 "Paramasivam, Meenakshisundaram"
> <mparamas@iupui.edu> wrote:
> 
> > [root@in-rady-neuro9 ~]# df -kl /myarray
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > /dev/md2             11537161976    162432 10950945196   1% /myarray
> > Should be 7TB of used space.
> 
> This is bad.  Something has happened to your filesystem.
> It is almost as though someone ran "mkfs" on the array.
> I don't know much about recovery after such an action, but I doubt you
> will get much back.
> 
> >
> > [root@in-rady-neuro9 ~]# cat /proc/partitions
> > major minor  #blocks  name
> >
> >    8        0  438960128 sda
> >    8        1     512000 sda1
> >    8        2   51200000 sda2
> >    8        3  387247104 sda3
> >    8       16 1953514584 sdb
> >    8       32 1953514584 sdc
> >    8       48 1953514584 sdd
> >    8       64 1953514584 sde
> >    8       80 1953514584 sdf
> >    8       96 1953514584 sdg
> >    8      112 1953514584 sdh
> >    8      128 1953514584 sdi
> >  253        0  346226688 dm-0
> >  253        1   40992768 dm-1
> 
> No md2 ???
> 
> >
> > sd[b-i] are raid devices
> >
> > [root@in-rady-neuro9 ~]# mdadm --detail /dev/md2
> > /dev/md2:
> >         Version : 0.90
> >   Creation Time : Fri Dec 16 17:56:14 2011
> >      Raid Level : raid6
> >      Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
> >   Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)  <<<====== Wrong! Should be 7TB of used array space.
> 
> "Used Dev Size" isn't "how much of the array is used by the filesystem" -
> mdadm doesn't know anything about filesystems.
> It is "How much of each individual device is used by the array", which is
> usually a little less than the size of the smallest device.
> So 2TB is correct here.
> 
> 
> NeilBrown
> 
> 



* RE: mdadm raid6 recovery status
  2012-03-29 19:41           ` NeilBrown
@ 2012-03-30 21:22             ` Paramasivam, Meenakshisundaram
  2012-03-30 22:51               ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: Paramasivam, Meenakshisundaram @ 2012-03-30 21:22 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid


Thanks. Is it safe to run fsck -n (and then fsck -y) on /dev/md2 when sdg has just been added to md2 and is in "spare rebuilding" status, with recovery only 4% complete? BTW, we got all of the data backed up.

Background:
When we assembled md2 we forced the rest of the drives and did not include this drive. After 24 hours, when I tried to assemble all of the drives into md2, I got the "md: kicking non-fresh sdg from array!" message in dmesg, and the drive was shown as removed from md2 (via mdadm --detail). I just did:
# mdadm /dev/md2 --add /dev/sdg
mdadm: re-added /dev/sdg
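
A quick way to see why md considered sdg "non-fresh" is to compare its event counter with a member that stayed in the array; something along the lines of:

#mdadm --examine /dev/sdg /dev/sdb | egrep 'Events|Update Time'

A lower Events count on sdg (it missed the updates written while the array ran degraded) is what triggers the kick, and re-adding it starts the full rebuild.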

Sundar

________________________________________
From: NeilBrown [neilb@suse.de]
Sent: Thursday, March 29, 2012 3:41 PM
To: Paramasivam, Meenakshisundaram
Cc: linux-raid@vger.kernel.org
Subject: Re: mdadm raid6 recovery status

On Thu, 29 Mar 2012 18:47:14 +0000 "Paramasivam, Meenakshisundaram"
<mparamas@iupui.edu> wrote:

>
> Clarification:
> >>should I do new array creation
> I meant running newfs on assembled 12 TB array, and restore data from backup, to resolve "df" reporting problem.

I would suggest asking on
    linux-ext4@vger.kernel.org

be sure to give lots of details - kernel version etc.
It would be worth running
   fsck -n /dev/md2
first and see if it reports anything strange.
Maybe  just a fsck will fix it.

NeilBrown


>
> ________________________________________
> From: Paramasivam, Meenakshisundaram
> Sent: Thursday, March 29, 2012 1:33 PM
> To: NeilBrown
> Cc: linux-raid@vger.kernel.org
> Subject: RE: mdadm raid6 recovery status
>
> Good news: Got ALL of our data back. [Actually it was 4.96TB not 7TB].
>                      mdadm is a good one.
>
> Bad news: "df" is reporting wrong, while "du" is showing full size.
> # df -kl /myarray
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/md2             11537161976    162432 10950945196   1% /myarray
> # du -sk /myarray
> 5326133556      /myarray
> #
>
> I never looked into du or looked in depth of the files & folders and simply got mislead by reported "df" usage; data was there all along. We definitely want "df" for the array's filesystem (ext3) to report right.
>
> Now that we are backing up all of the data (at 400 Mbps) over network, I want to know if "df" reporting can be fixed easily or should I do new array creation and restore data from backup.
>
> We are ordering a new RAID card, just to be on safer side.
>
> Sundar
>
> ________________________________________
> From: NeilBrown [neilb@suse.de]
> Sent: Wednesday, March 28, 2012 7:27 PM
> To: Paramasivam, Meenakshisundaram
> Cc: linux-raid@vger.kernel.org
> Subject: Re: mdadm raid6 recovery status
>
> On Wed, 28 Mar 2012 12:49:18 +0000 "Paramasivam, Meenakshisundaram"
> <mparamas@iupui.edu> wrote:
>
> > [root@in-rady-neuro9 ~]# df -kl /myarray
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > /dev/md2             11537161976    162432 10950945196   1% /myarray
> > Should be 7TB of used space.
>
> This is bad.  Something has happened to your filesystem.
> It is almost as though someone ran "mkfs" on the array.
> I don't know much about recovery after such an action, but I doubt you
> will get much back.
>
> >
> > [root@in-rady-neuro9 ~]# cat /proc/partitions
> > major minor  #blocks  name
> >
> >    8        0  438960128 sda
> >    8        1     512000 sda1
> >    8        2   51200000 sda2
> >    8        3  387247104 sda3
> >    8       16 1953514584 sdb
> >    8       32 1953514584 sdc
> >    8       48 1953514584 sdd
> >    8       64 1953514584 sde
> >    8       80 1953514584 sdf
> >    8       96 1953514584 sdg
> >    8      112 1953514584 sdh
> >    8      128 1953514584 sdi
> >  253        0  346226688 dm-0
> >  253        1   40992768 dm-1
>
> No md2 ???
>
> >
> > sd[b-i] are raid devices
> >
> > [root@in-rady-neuro9 ~]# mdadm --detail /dev/md2
> > /dev/md2:
> >         Version : 0.90
> >   Creation Time : Fri Dec 16 17:56:14 2011
> >      Raid Level : raid6
> >      Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
> >   Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)  <<<====== Wrong! Should be 7TB of used array space.
>
> "Used Dev Size" isn't "how much of the array is used by the filesystem" -
> mdadm doesn't know anything about filesystems.
> It is "How much of each individual device is used by the array", which is
> usually a little less than the size of the smallest device.
> So 2TB is correct here.
>
>
> NeilBrown
>
>



* Re: mdadm raid6 recovery status
  2012-03-30 21:22             ` Paramasivam, Meenakshisundaram
@ 2012-03-30 22:51               ` NeilBrown
  0 siblings, 0 replies; 10+ messages in thread
From: NeilBrown @ 2012-03-30 22:51 UTC (permalink / raw)
  To: Paramasivam, Meenakshisundaram; +Cc: linux-raid


On Fri, 30 Mar 2012 21:22:34 +0000 "Paramasivam, Meenakshisundaram"
<mparamas@iupui.edu> wrote:

> 
> Thanks. Is it safe to fsck -n (and then fsck -y) on /dev/md2 when sdg is just added to md2 and is in "spare rebulding" status and recovery is only at 4% completed? BTW, we got all of the data backed up.

It is certainly safe to run "fsck -n" while there is a spare rebuilding.  Each
process will slow the other down, but that shouldn't be a problem.
I would never say that "fsck -y" is safe without seeing the output of "fsck
-n"; however, rebuilding the spare should not affect the safety of "fsck -y".

And it is always nice to hear that people have their data safe -  thanks.

NeilBrown


> 
> Background:
> When we assembled md2 we forced rest of the drives and did not include this drive. After 24 hrs, when I tried to assemble all of the drives into md2, I got "md: kicking non-fresh sdg from array!" message through dmesg, and was removed from md2 (through mdadm --detail). I just did:
> # mdadm /dev/md2 --add /dev/sdg
> mdadm: re-added /dev/sdg
> 
> Sundar
> 
> ________________________________________
> From: NeilBrown [neilb@suse.de]
> Sent: Thursday, March 29, 2012 3:41 PM
> To: Paramasivam, Meenakshisundaram
> Cc: linux-raid@vger.kernel.org
> Subject: Re: mdadm raid6 recovery status
> 
> On Thu, 29 Mar 2012 18:47:14 +0000 "Paramasivam, Meenakshisundaram"
> <mparamas@iupui.edu> wrote:
> 
> >
> > Clarification:
> > >>should I do new array creation
> > I meant running newfs on assembled 12 TB array, and restore data from backup, to resolve "df" reporting problem.
> 
> I would suggest asking on
>     linux-ext4@vger.kernel.org
> 
> be sure to give lots of details - kernel version etc.
> It would be worth running
>    fsck -n /dev/md2
> first and see if it reports anything strange.
> Maybe  just a fsck will fix it.
> 
> NeilBrown
> 
> 
> >
> > ________________________________________
> > From: Paramasivam, Meenakshisundaram
> > Sent: Thursday, March 29, 2012 1:33 PM
> > To: NeilBrown
> > Cc: linux-raid@vger.kernel.org
> > Subject: RE: mdadm raid6 recovery status
> >
> > Good news: Got ALL of our data back. [Actually it was 4.96TB not 7TB].
> >                      mdadm is a good one.
> >
> > Bad news: "df" is reporting wrong, while "du" is showing full size.
> > # df -kl /myarray
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > /dev/md2             11537161976    162432 10950945196   1% /myarray
> > # du -sk /myarray
> > 5326133556      /myarray
> > #
> >
> > I never looked into du or looked in depth of the files & folders and simply got mislead by reported "df" usage; data was there all along. We definitely want "df" for the array's filesystem (ext3) to report right.
> >
> > Now that we are backing up all of the data (at 400 Mbps) over network, I want to know if "df" reporting can be fixed easily or should I do new array creation and restore data from backup.
> >
> > We are ordering a new RAID card, just to be on safer side.
> >
> > Sundar
> >
> > ________________________________________
> > From: NeilBrown [neilb@suse.de]
> > Sent: Wednesday, March 28, 2012 7:27 PM
> > To: Paramasivam, Meenakshisundaram
> > Cc: linux-raid@vger.kernel.org
> > Subject: Re: mdadm raid6 recovery status
> >
> > On Wed, 28 Mar 2012 12:49:18 +0000 "Paramasivam, Meenakshisundaram"
> > <mparamas@iupui.edu> wrote:
> >
> > > [root@in-rady-neuro9 ~]# df -kl /myarray
> > > Filesystem           1K-blocks      Used Available Use% Mounted on
> > > /dev/md2             11537161976    162432 10950945196   1% /myarray
> > > Should be 7TB of used space.
> >
> > This is bad.  Something has happened to your filesystem.
> > It is almost as though someone ran "mkfs" on the array.
> > I don't know much about recovery after such an action, but I doubt you
> > will get much back.
> >
> > >
> > > [root@in-rady-neuro9 ~]# cat /proc/partitions
> > > major minor  #blocks  name
> > >
> > >    8        0  438960128 sda
> > >    8        1     512000 sda1
> > >    8        2   51200000 sda2
> > >    8        3  387247104 sda3
> > >    8       16 1953514584 sdb
> > >    8       32 1953514584 sdc
> > >    8       48 1953514584 sdd
> > >    8       64 1953514584 sde
> > >    8       80 1953514584 sdf
> > >    8       96 1953514584 sdg
> > >    8      112 1953514584 sdh
> > >    8      128 1953514584 sdi
> > >  253        0  346226688 dm-0
> > >  253        1   40992768 dm-1
> >
> > No md2 ???
> >
> > >
> > > sd[b-i] are raid devices
> > >
> > > [root@in-rady-neuro9 ~]# mdadm --detail /dev/md2
> > > /dev/md2:
> > >         Version : 0.90
> > >   Creation Time : Fri Dec 16 17:56:14 2011
> > >      Raid Level : raid6
> > >      Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
> > >   Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)  <<<====== Wrong! Should be 7TB of used array space.
> >
> > "Used Dev Size" isn't "how much of the array is used by the filesystem" -
> > mdadm doesn't know anything about filesystems.
> > It is "How much of each individual device is used by the array", which is
> > usually a little less than the size of the smallest device.
> > So 2TB is correct here.
> >
> >
> > NeilBrown
> >
> >
> 



Thread overview: 10+ messages
2012-03-28  2:22 mdadm raid6 recovery status Paramasivam, Meenakshisundaram
2012-03-28  4:11 ` NeilBrown
2012-03-28 12:49   ` Paramasivam, Meenakshisundaram
2012-03-28 19:39     ` John Robinson
2012-03-28 23:27     ` NeilBrown
2012-03-29 17:33       ` Paramasivam, Meenakshisundaram
2012-03-29 18:47         ` Paramasivam, Meenakshisundaram
2012-03-29 19:41           ` NeilBrown
2012-03-30 21:22             ` Paramasivam, Meenakshisundaram
2012-03-30 22:51               ` NeilBrown
