* help please, can't mount/recover raid 5 array
@ 2013-02-09 21:03 Daniel Sanabria
  2013-02-09 23:00 ` Dave Cundiff
  2013-02-10  6:29 ` Mikael Abrahamsson
  0 siblings, 2 replies; 10+ messages in thread
From: Daniel Sanabria @ 2013-02-09 21:03 UTC (permalink / raw)
  To: linux-raid

Hi,

I'm having issues with my RAID 5 array after upgrading my OS, and I
have to say I'm desperate :-(

Whenever I try to mount the array I get the following:

[root@lamachine ~]# mount /mnt/raid/
mount: /dev/sda3 is already mounted or /mnt/raid busy
[root@lamachine ~]#

and the messages log is recording the following:

Feb  9 20:25:10 lamachine kernel: [ 3887.287305] EXT4-fs (md2): VFS:
Can't find ext4 filesystem
Feb  9 20:25:10 lamachine kernel: [ 3887.304025] EXT4-fs (md2): VFS:
Can't find ext4 filesystem
Feb  9 20:25:10 lamachine kernel: [ 3887.320702] EXT4-fs (md2): VFS:
Can't find ext4 filesystem
Feb  9 20:25:10 lamachine kernel: [ 3887.353233] ISOFS: Unable to
identify CD-ROM format.
Feb  9 20:25:10 lamachine kernel: [ 3887.353571] FAT-fs (md2): invalid
media value (0x82)
Feb  9 20:25:10 lamachine kernel: [ 3887.368809] FAT-fs (md2): Can't
find a valid FAT filesystem
Feb  9 20:25:10 lamachine kernel: [ 3887.369140] hfs: can't find a HFS
filesystem on dev md2.
Feb  9 20:25:10 lamachine kernel: [ 3887.369665] hfs: unable to find
HFS+ superblock

/etc/fstab is as follows:

[root@lamachine ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Fri Feb  8 17:33:14 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_bigblackbox-LogVol_root /                       ext4
defaults        1 1
UUID=7bee0f50-3e23-4a5b-bfb5-42006d6c8561 /boot                   ext4
   defaults        1 2
UUID=48be851b-f021-0b64-e9fb-efdf24c84c5f /mnt/raid ext4 defaults 1 2
/dev/mapper/vg_bigblackbox-LogVol_opt /opt                    ext4
defaults        1 2
/dev/mapper/vg_bigblackbox-LogVol_tmp /tmp                    ext4
defaults        1 2
/dev/mapper/vg_bigblackbox-LogVol_var /var                    ext4
defaults        1 2
UUID=70933ff3-8ed0-4486-abf1-01f00023d1b2 swap                    swap
   defaults        0 0
[root@lamachine ~]#

After the upgrade I had to assemble the array manually and didn't get
any errors, but I was still getting the mount problem. I went ahead and
recreated it with mdadm --create --assume-clean and still the same result.

here's some more info about md2:
[root@lamachine ~]# mdadm --misc --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sat Feb  9 17:30:32 2013
     Raid Level : raid5
     Array Size : 511996928 (488.28 GiB 524.28 GB)
  Used Dev Size : 255998464 (244.14 GiB 262.14 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Feb  9 20:47:46 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : lamachine:2  (local to host lamachine)
           UUID : 48be851b:f0210b64:e9fbefdf:24c84c5f
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
[root@lamachine ~]#

it looks like it knows how much space is being used, which might
indicate that the data is still there?

What can I do to recover the data?

Any help or guidance is more than welcome.

Thanks in advance,

Dan

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: help please, can't mount/recover raid 5 array
  2013-02-09 21:03 help please, can't mount/recover raid 5 array Daniel Sanabria
@ 2013-02-09 23:00 ` Dave Cundiff
  2013-02-10  9:17   ` Daniel Sanabria
  2013-02-10  6:29 ` Mikael Abrahamsson
  1 sibling, 1 reply; 10+ messages in thread
From: Dave Cundiff @ 2013-02-09 23:00 UTC (permalink / raw)
  To: Daniel Sanabria; +Cc: linux-raid

On Sat, Feb 9, 2013 at 4:03 PM, Daniel Sanabria <sanabria.d@gmail.com> wrote:
> Hi,
>
> I'm having issues with my RAID 5 array after upgrading my OS, and I
> have to say I'm desperate :-(
>
> Whenever I try to mount the array I get the following:
>
> [root@lamachine ~]# mount /mnt/raid/
> mount: /dev/sda3 is already mounted or /mnt/raid busy
> [root@lamachine ~]#
>
> and the messages log is recording the following:
>
> Feb  9 20:25:10 lamachine kernel: [ 3887.287305] EXT4-fs (md2): VFS:
> Can't find ext4 filesystem
> Feb  9 20:25:10 lamachine kernel: [ 3887.304025] EXT4-fs (md2): VFS:
> Can't find ext4 filesystem
> Feb  9 20:25:10 lamachine kernel: [ 3887.320702] EXT4-fs (md2): VFS:
> Can't find ext4 filesystem
> Feb  9 20:25:10 lamachine kernel: [ 3887.353233] ISOFS: Unable to
> identify CD-ROM format.
> Feb  9 20:25:10 lamachine kernel: [ 3887.353571] FAT-fs (md2): invalid
> media value (0x82)
> Feb  9 20:25:10 lamachine kernel: [ 3887.368809] FAT-fs (md2): Can't
> find a valid FAT filesystem
> Feb  9 20:25:10 lamachine kernel: [ 3887.369140] hfs: can't find a HFS
> filesystem on dev md2.
> Feb  9 20:25:10 lamachine kernel: [ 3887.369665] hfs: unable to find
> HFS+ superblock
>
> /etc/fstab is as follows:
>
> [root@lamachine ~]# cat /etc/fstab
>
> #
> # /etc/fstab
> # Created by anaconda on Fri Feb  8 17:33:14 2013
> #
> # Accessible filesystems, by reference, are maintained under '/dev/disk'
> # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
> #
> /dev/mapper/vg_bigblackbox-LogVol_root /                       ext4
> defaults        1 1
> UUID=7bee0f50-3e23-4a5b-bfb5-42006d6c8561 /boot                   ext4
>    defaults        1 2
> UUID=48be851b-f021-0b64-e9fb-efdf24c84c5f /mnt/raid ext4 defaults 1 2
> /dev/mapper/vg_bigblackbox-LogVol_opt /opt                    ext4
> defaults        1 2
> /dev/mapper/vg_bigblackbox-LogVol_tmp /tmp                    ext4
> defaults        1 2
> /dev/mapper/vg_bigblackbox-LogVol_var /var                    ext4
> defaults        1 2
> UUID=70933ff3-8ed0-4486-abf1-01f00023d1b2 swap                    swap
>    defaults        0 0
> [root@lamachine ~]#
>
> After the upgrade I had to assemble the array manually and didn't get
> any errors, but I was still getting the mount problem. I went ahead and
> recreated it with mdadm --create --assume-clean and still the same result.
>
> here's some more info about md2:
> [root@lamachine ~]# mdadm --misc --detail /dev/md2
> /dev/md2:
>         Version : 1.2
>   Creation Time : Sat Feb  9 17:30:32 2013
>      Raid Level : raid5
>      Array Size : 511996928 (488.28 GiB 524.28 GB)
>   Used Dev Size : 255998464 (244.14 GiB 262.14 GB)
>    Raid Devices : 3
>   Total Devices : 3
>     Persistence : Superblock is persistent
>
>     Update Time : Sat Feb  9 20:47:46 2013
>           State : clean
>  Active Devices : 3
> Working Devices : 3
>  Failed Devices : 0
>   Spare Devices : 0
>
>          Layout : left-symmetric
>      Chunk Size : 512K
>
>            Name : lamachine:2  (local to host lamachine)
>            UUID : 48be851b:f0210b64:e9fbefdf:24c84c5f
>          Events : 2
>
>     Number   Major   Minor   RaidDevice State
>        0       8        3        0      active sync   /dev/sda3
>        1       8       18        1      active sync   /dev/sdb2
>        2       8       34        2      active sync   /dev/sdc2
> [root@lamachine ~]#
>
> it looks like it knows how much space is being used, which might
> indicate that the data is still there?
>
> What can I do to recover the data?
>
> Any help or guidance is more than welcome.
>
> Thanks in advance,
>
> Dan
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

What OS did you upgrade from and to? What OS was the array originally
created on?

Looks like you have LVM on top of the md array, so the output of
pvdisplay and vgdisplay would be useful.

Did you specify a metadata version when re-creating the array?
Recreating the array at best changed the UUID; at worst, depending on
what OS the array was created on, it overwrote the beginning of your
partitions.
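
A quick way to check which metadata version a member currently carries
(the device name here is only an example):

mdadm --examine /dev/sda3 | grep -i version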

--
Dave Cundiff
System Administrator
A2Hosting, Inc
http://www.a2hosting.com

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: help please, can't mount/recover raid 5 array
  2013-02-09 21:03 help please, can't mount/recover raid 5 array Daniel Sanabria
  2013-02-09 23:00 ` Dave Cundiff
@ 2013-02-10  6:29 ` Mikael Abrahamsson
       [not found]   ` <CAHscji0h5nHUssKi23BMfR=Ek+jSH+vK0odYNWkzrVDf6t18mw@mail.gmail.com>
  1 sibling, 1 reply; 10+ messages in thread
From: Mikael Abrahamsson @ 2013-02-10  6:29 UTC (permalink / raw)
  To: linux-raid

On Sat, 9 Feb 2013, Daniel Sanabria wrote:

> After the upgrade I had to assemble the array manually and didn't get
> any errors, but I was still getting the mount problem. I went ahead and
> recreated it with mdadm --create --assume-clean and still the same
> result.

Did you save mdadm --examine from the drives *before* you did this?

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: help please, can't mount/recover raid 5 array
  2013-02-09 23:00 ` Dave Cundiff
@ 2013-02-10  9:17   ` Daniel Sanabria
  0 siblings, 0 replies; 10+ messages in thread
From: Daniel Sanabria @ 2013-02-10  9:17 UTC (permalink / raw)
  To: Dave Cundiff; +Cc: linux-raid

Hi Dave,

The upgrade was from Fedora 16 to Fedora 17, and I think the array was
created on F16 or F15. I didn't specify the metadata version when
recreating :(

Here's the output of pvdisplay and vgdisplay, but I don't think I was
using LVM here (I know this from the output of an old kickstart file,
anaconda-ks.cfg, on F16):

[root@lamachine ~]#

[root@lamachine ~]# pvdisplay -v
    Scanning for physical volume names
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               libvirt_lvm
  PV Size               90.00 GiB / not usable 3.50 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              23038
  Free PE               5630
  Allocated PE          17408
  PV UUID               VmsWRd-8qHt-bauf-lvAn-FC97-KyH5-gk89ox

  --- Physical volume ---
  PV Name               /dev/md126
  VG Name               vg_bigblackbox
  PV Size               29.30 GiB / not usable 3.94 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              7499
  Free PE               1499
  Allocated PE          6000
  PV UUID               cE4ePh-RWO8-Wgdy-YPOY-ehyC-KI6u-io1cyH

[root@lamachine ~]# vgdisplay -v
    Finding all volume groups
    Finding volume group "libvirt_lvm"
  --- Volume group ---
  VG Name               libvirt_lvm
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               89.99 GiB
  PE Size               4.00 MiB
  Total PE              23038
  Alloc PE / Size       17408 / 68.00 GiB
  Free  PE / Size       5630 / 21.99 GiB
  VG UUID               t8GQck-f2Eu-iD2V-fnJQ-kBm6-QyKw-dR31PB

  --- Logical volume ---
  LV Path                /dev/libvirt_lvm/win7
  LV Name                win7
  VG Name                libvirt_lvm
  LV UUID                uJaz2L-jhCy-kOU2-klnM-i6P7-I13O-5D1u3d
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                25.00 GiB
  Current LE             6400
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/libvirt_lvm/cms_test
  LV Name                cms_test
  VG Name                libvirt_lvm
  LV UUID                ix5PwP-Wket-9rAe-foq3-8hJY-jfVL-haCU6a
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:2

  --- Logical volume ---
  LV Path                /dev/libvirt_lvm/centos_updt
  LV Name                centos_updt
  VG Name                libvirt_lvm
  LV UUID                vp1nAZ-jZmX-BqMb-fuEL-kkto-1d6X-a15ecI
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:3

  --- Logical volume ---
  LV Path                /dev/libvirt_lvm/cms
  LV Name                cms
  VG Name                libvirt_lvm
  LV UUID                gInAgv-7LAQ-djtZ-Oc6P-xRME-dHU4-Wj885d
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:4

  --- Logical volume ---
  LV Path                /dev/libvirt_lvm/litp
  LV Name                litp
  VG Name                libvirt_lvm
  LV UUID                dbev0d-b7Tx-WXro-fMvN-dcm6-SH5N-ylIdlS
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                19.00 GiB
  Current LE             4864
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:5

  --- Physical volumes ---
  PV Name               /dev/md127
  PV UUID               VmsWRd-8qHt-bauf-lvAn-FC97-KyH5-gk89ox
  PV Status             allocatable
  Total PE / Free PE    23038 / 5630

    Finding volume group "vg_bigblackbox"
  --- Volume group ---
  VG Name               vg_bigblackbox
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               4
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               29.29 GiB
  PE Size               4.00 MiB
  Total PE              7499
  Alloc PE / Size       6000 / 23.44 GiB
  Free  PE / Size       1499 / 5.86 GiB
  VG UUID               VWfuwI-5v2q-w8qf-FEbc-BdGW-3mKX-pZd7hR

  --- Logical volume ---
  LV Path                /dev/vg_bigblackbox/LogVol_var
  LV Name                LogVol_var
  VG Name                vg_bigblackbox
  LV UUID                1NJcwG-01B4-6CSY-eijZ-bEES-Rcqd-tTM3ig
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                3.91 GiB
  Current LE             1000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

  --- Logical volume ---
  LV Path                /dev/vg_bigblackbox/LogVol_root
  LV Name                LogVol_root
  VG Name                vg_bigblackbox
  LV UUID                VTBWT0-OdxR-R5bG-ZiTV-oZAp-8KX0-s9ziS8
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                9.77 GiB
  Current LE             2500
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg_bigblackbox/LogVol_opt
  LV Name                LogVol_opt
  VG Name                vg_bigblackbox
  LV UUID                x8kbeS-erIn-X1oJ-5oXp-H2AK-HHHQ-Z3GnB1
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                7.81 GiB
  Current LE             2000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7

  --- Logical volume ---
  LV Path                /dev/vg_bigblackbox/LogVol_tmp
  LV Name                LogVol_tmp
  VG Name                vg_bigblackbox
  LV UUID                j8A2Rv-KNo9-MmBV-WMEw-snIu-cfWU-HXkvnM
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                1.95 GiB
  Current LE             500
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8

  --- Physical volumes ---
  PV Name               /dev/md126
  PV UUID               cE4ePh-RWO8-Wgdy-YPOY-ehyC-KI6u-io1cyH
  PV Status             allocatable
  Total PE / Free PE    7499 / 1499

[root@lamachine ~]#

Here's the content of the old kickstart file:

$ cat anaconda-ks.cfg
# Kickstart file automatically generated by anaconda.

#version=DEVEL
install
lang en_US.UTF-8
keyboard uk
network --onboot yes --device p20p1 --bootproto dhcp --noipv6
--hostname lamachine
timezone --utc Europe/London
rootpw  --iscrypted
$6$Ue9iCKeAVqBBTb24$mZFg.v4BjFAM/gD8FOaZBPTu.7PLixoZNWVsa6L65eHl1aON3m.CmTB7ni1gnuH7KqUzG2UPmCOyPEocdByh.1
selinux --enforcing
authconfig --enableshadow --passalgo=sha512
firewall --service=ssh
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
#clearpart --none



#part  --onpart=sdc6 --noformat
#part raid.008037 --onpart=sdc5 --noformat
#part raid.008034 --onpart=sdc2 --noformat
#part raid.008033 --onpart=sdc1 --noformat


#part  --onpart=sdb6 --noformat
#part raid.008021 --onpart=sdb5 --noformat
#part swap --onpart=sdb3 --noformat
#part raid.008018 --onpart=sdb2 --noformat


#part  --onpart=sda6 --noformat
#part raid.008005 --onpart=sda5 --noformat
#raid pv.009003 --level=0 --device=md3 --useexisting --noformat
raid.008005 raid.008021 raid.008037
#volgroup libvirt_lvm --pesize=4096 --useexisting --noformat pv.009003
#logvol  --name=win7 --vgname=libvirt_lvm --useexisting --noformat
#logvol  --name=litp --vgname=libvirt_lvm --useexisting --noformat
#logvol  --name=cms_test --vgname=libvirt_lvm --useexisting --noformat
#logvol  --name=cms --vgname=libvirt_lvm --useexisting --noformat
#logvol  --name=centos_updt --vgname=libvirt_lvm --useexisting --noformat
#part raid.008003 --onpart=sda3 --noformat
#raid /home --fstype=ext4 --level=5 --device=md2 --useexisting
--noformat raid.008003 raid.008018 raid.008034
#part raid.008002 --onpart=sda2 --noformat
#raid pv.009001 --level=10 --device=md1 --useexisting --noformat
raid.008002 raid.008033
#volgroup vg_bigblackbox --pesize=4096 --useexisting --noformat pv.009001
#logvol /var --fstype=ext4 --name=LogVol_var --vgname=vg_bigblackbox
--useexisting
#logvol /tmp --fstype=ext4 --name=LogVol_tmp --vgname=vg_bigblackbox
--useexisting
#logvol / --fstype=ext4 --name=LogVol_root --vgname=vg_bigblackbox --useexisting
#logvol /opt --fstype=ext4 --name=LogVol_opt --vgname=vg_bigblackbox
--useexisting
#part /boot --fstype=ext4 --onpart=sda1








bootloader --location=mbr --timeout=5 --driveorder=sda,sdb,sdc
--append="nomodeset quiet rhgb"
repo --name="Fedora 16 - x86_64"
--baseurl=http://mirror.bytemark.co.uk/fedora/linux/releases/16/Everything/x86_64/os/
--cost=1000
repo --name="Fedora 16 - x86_64 - Updates"
--baseurl=http://mirror.bytemark.co.uk/fedora/linux/updates/16/x86_64/
--cost=1000

%packages
@core
@online-docs
@virtualization
python-libguestfs
virt-top
libguestfs-tools
guestfs-browser
%end
$

Regards,

Daniel

On 9 February 2013 23:00, Dave Cundiff <syshackmin@gmail.com> wrote:
> On Sat, Feb 9, 2013 at 4:03 PM, Daniel Sanabria <sanabria.d@gmail.com> wrote:
>> Hi,
>>
>> I'm having issues with my RAID 5 array after upgrading my OS, and I
>> have to say I'm desperate :-(
>>
>> Whenever I try to mount the array I get the following:
>>
>> [root@lamachine ~]# mount /mnt/raid/
>> mount: /dev/sda3 is already mounted or /mnt/raid busy
>> [root@lamachine ~]#
>>
>> and the messages log is recording the following:
>>
>> Feb  9 20:25:10 lamachine kernel: [ 3887.287305] EXT4-fs (md2): VFS:
>> Can't find ext4 filesystem
>> Feb  9 20:25:10 lamachine kernel: [ 3887.304025] EXT4-fs (md2): VFS:
>> Can't find ext4 filesystem
>> Feb  9 20:25:10 lamachine kernel: [ 3887.320702] EXT4-fs (md2): VFS:
>> Can't find ext4 filesystem
>> Feb  9 20:25:10 lamachine kernel: [ 3887.353233] ISOFS: Unable to
>> identify CD-ROM format.
>> Feb  9 20:25:10 lamachine kernel: [ 3887.353571] FAT-fs (md2): invalid
>> media value (0x82)
>> Feb  9 20:25:10 lamachine kernel: [ 3887.368809] FAT-fs (md2): Can't
>> find a valid FAT filesystem
>> Feb  9 20:25:10 lamachine kernel: [ 3887.369140] hfs: can't find a HFS
>> filesystem on dev md2.
>> Feb  9 20:25:10 lamachine kernel: [ 3887.369665] hfs: unable to find
>> HFS+ superblock
>>
>> /etc/fstab is as follows:
>>
>> [root@lamachine ~]# cat /etc/fstab
>>
>> #
>> # /etc/fstab
>> # Created by anaconda on Fri Feb  8 17:33:14 2013
>> #
>> # Accessible filesystems, by reference, are maintained under '/dev/disk'
>> # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
>> #
>> /dev/mapper/vg_bigblackbox-LogVol_root /                       ext4
>> defaults        1 1
>> UUID=7bee0f50-3e23-4a5b-bfb5-42006d6c8561 /boot                   ext4
>>    defaults        1 2
>> UUID=48be851b-f021-0b64-e9fb-efdf24c84c5f /mnt/raid ext4 defaults 1 2
>> /dev/mapper/vg_bigblackbox-LogVol_opt /opt                    ext4
>> defaults        1 2
>> /dev/mapper/vg_bigblackbox-LogVol_tmp /tmp                    ext4
>> defaults        1 2
>> /dev/mapper/vg_bigblackbox-LogVol_var /var                    ext4
>> defaults        1 2
>> UUID=70933ff3-8ed0-4486-abf1-01f00023d1b2 swap                    swap
>>    defaults        0 0
>> [root@lamachine ~]#
>>
>> After the upgrade I had to assemble the array manually and didn't get
>> any errors, but I was still getting the mount problem. I went ahead and
>> recreated it with mdadm --create --assume-clean and still the same result.
>>
>> here's some more info about md2:
>> [root@lamachine ~]# mdadm --misc --detail /dev/md2
>> /dev/md2:
>>         Version : 1.2
>>   Creation Time : Sat Feb  9 17:30:32 2013
>>      Raid Level : raid5
>>      Array Size : 511996928 (488.28 GiB 524.28 GB)
>>   Used Dev Size : 255998464 (244.14 GiB 262.14 GB)
>>    Raid Devices : 3
>>   Total Devices : 3
>>     Persistence : Superblock is persistent
>>
>>     Update Time : Sat Feb  9 20:47:46 2013
>>           State : clean
>>  Active Devices : 3
>> Working Devices : 3
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>            Name : lamachine:2  (local to host lamachine)
>>            UUID : 48be851b:f0210b64:e9fbefdf:24c84c5f
>>          Events : 2
>>
>>     Number   Major   Minor   RaidDevice State
>>        0       8        3        0      active sync   /dev/sda3
>>        1       8       18        1      active sync   /dev/sdb2
>>        2       8       34        2      active sync   /dev/sdc2
>> [root@lamachine ~]#
>>
>> it looks like it knows how much space is being used, which might
>> indicate that the data is still there?
>>
>> What can I do to recover the data?
>>
>> Any help or guidance is more than welcome.
>>
>> Thanks in advance,
>>
>> Dan
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
> What OS did you upgrade from and to? What OS was the array originally
> created on?
>
> Looks like you have LVM on top of the md array, so the output of
> pvdisplay and vgdisplay would be useful.
>
> Did you specify a metadata version when re-creating the array?
> Recreating the array at best changed the UUID; at worst, depending on
> what OS the array was created on, it overwrote the beginning of your
> partitions.
>
> --
> Dave Cundiff
> System Administrator
> A2Hosting, Inc
> http://www.a2hosting.com

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: help please, can't mount/recover raid 5 array
       [not found]   ` <CAHscji0h5nHUssKi23BMfR=Ek+jSH+vK0odYNWkzrVDf6t18mw@mail.gmail.com>
@ 2013-02-10  9:36     ` Daniel Sanabria
  2013-02-10 21:05       ` Phil Turmel
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Sanabria @ 2013-02-10  9:36 UTC (permalink / raw)
  To: Mikael Abrahamsson, linux-raid

to the whole list this time ...

Thanks,

Daniel

On 10 February 2013 09:17, Daniel Sanabria <sanabria.d@gmail.com> wrote:
> Hi Mikael,
>
> Yes I did. Here it is:
>
> [root@lamachine ~]# mdadm --examine /dev/sd*
> /dev/sda:
>    MBR Magic : aa55
> Partition[0] :       407552 sectors at         2048 (type 83)
> Partition[1] :     61440000 sectors at       409663 (type fd)
> Partition[2] :    512000000 sectors at     61849663 (type fd)
> Partition[3] :    402918402 sectors at    573849663 (type 05)
> mdadm: No md superblock detected on /dev/sda1.
> /dev/sda2:
>           Magic : a92b4efc
>         Version : 0.90.00
>            UUID : 9af006ca:8845bbd3:bfe78010:bc810f04
>   Creation Time : Thu Dec  3 22:12:12 2009
>      Raid Level : raid10
>   Used Dev Size : 30719936 (29.30 GiB 31.46 GB)
>      Array Size : 30719936 (29.30 GiB 31.46 GB)
>    Raid Devices : 2
>   Total Devices : 2
> Preferred Minor : 126
>
>     Update Time : Sat Feb  9 17:21:45 2013
>           State : clean
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
>        Checksum : e6d627b3 - correct
>          Events : 263120
>
>          Layout : near=2
>      Chunk Size : 64K
>
>       Number   Major   Minor   RaidDevice State
> this     0       8        2        0      active sync   /dev/sda2
>
>    0     0       8        2        0      active sync   /dev/sda2
>    1     1       8       33        1      active sync   /dev/sdc1
> /dev/sda3:
>           Magic : a92b4efc
>         Version : 0.90.00
>            UUID : 0deb6f79:aec7ed69:bfe78010:bc810f04
>   Creation Time : Thu Dec  3 22:12:24 2009
>      Raid Level : raid5
>   Used Dev Size : 255999936 (244.14 GiB 262.14 GB)
>      Array Size : 511999872 (488.28 GiB 524.29 GB)
>    Raid Devices : 3
>   Total Devices : 3
> Preferred Minor : 2
>
>     Update Time : Sat Feb  9 16:09:20 2013
>           State : clean
>  Active Devices : 3
> Working Devices : 3
>  Failed Devices : 0
>   Spare Devices : 0
>        Checksum : 8dd157e5 - correct
>          Events : 792552
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>       Number   Major   Minor   RaidDevice State
> this     0       8        3        0      active sync   /dev/sda3
>
>    0     0       8        3        0      active sync   /dev/sda3
>    1     1       8       18        1      active sync   /dev/sdb2
>    2     2       8       34        2      active sync   /dev/sdc2
> /dev/sda4:
>    MBR Magic : aa55
> Partition[0] :     62918679 sectors at           63 (type 83)
> Partition[1] :      7116795 sectors at     82453782 (type 05)
> /dev/sda5:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : acd5374f:72628c93:6a906c4b:5f675ce5
>            Name : reading.homeunix.com:3
>   Creation Time : Tue Jul 26 19:00:28 2011
>      Raid Level : raid0
>    Raid Devices : 3
>
>  Avail Dev Size : 62916631 (30.00 GiB 32.21 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 5778cd64:0bbba183:ef3270a8:41f83aca
>
>     Update Time : Tue Jul 26 19:00:28 2011
>        Checksum : 96003cba - correct
>          Events : 0
>
>      Chunk Size : 512K
>
>    Device Role : Active device 0
>    Array State : AAA ('A' == active, '.' == missing)
> mdadm: No md superblock detected on /dev/sda6.
> /dev/sdb:
>    MBR Magic : aa55
> Partition[1] :    512000000 sectors at       409663 (type fd)
> Partition[2] :     16384000 sectors at    512409663 (type 82)
> Partition[3] :    447974402 sectors at    528793663 (type 05)
> /dev/sdb2:
>           Magic : a92b4efc
>         Version : 0.90.00
>            UUID : 0deb6f79:aec7ed69:bfe78010:bc810f04
>   Creation Time : Thu Dec  3 22:12:24 2009
>      Raid Level : raid5
>   Used Dev Size : 255999936 (244.14 GiB 262.14 GB)
>      Array Size : 511999872 (488.28 GiB 524.29 GB)
>    Raid Devices : 3
>   Total Devices : 3
> Preferred Minor : 2
>
>     Update Time : Sat Feb  9 16:09:20 2013
>           State : clean
>  Active Devices : 3
> Working Devices : 3
>  Failed Devices : 0
>   Spare Devices : 0
>        Checksum : 8dd157f6 - correct
>          Events : 792552
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>       Number   Major   Minor   RaidDevice State
> this     1       8       18        1      active sync   /dev/sdb2
>
>    0     0       8        3        0      active sync   /dev/sda3
>    1     1       8       18        1      active sync   /dev/sdb2
>    2     2       8       34        2      active sync   /dev/sdc2
> mdadm: No md superblock detected on /dev/sdb3.
> /dev/sdb4:
>    MBR Magic : aa55
> Partition[0] :     62912354 sectors at           63 (type 83)
> Partition[1] :      7116795 sectors at     82447457 (type 05)
> /dev/sdb5:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : acd5374f:72628c93:6a906c4b:5f675ce5
>            Name : reading.homeunix.com:3
>   Creation Time : Tue Jul 26 19:00:28 2011
>      Raid Level : raid0
>    Raid Devices : 3
>
>  Avail Dev Size : 62910306 (30.00 GiB 32.21 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 152d0202:64efb3e7:f23658c3:82a239a1
>
>     Update Time : Tue Jul 26 19:00:28 2011
>        Checksum : 892dbb61 - correct
>          Events : 0
>
>      Chunk Size : 512K
>
>    Device Role : Active device 1
>    Array State : AAA ('A' == active, '.' == missing)
> mdadm: No md superblock detected on /dev/sdb6.
> /dev/sdc:
>    MBR Magic : aa55
> Partition[0] :     61440000 sectors at           63 (type fd)
> Partition[1] :    512000000 sectors at     61440063 (type fd)
> Partition[2] :    403328002 sectors at    573440063 (type 05)
> /dev/sdc1:
>           Magic : a92b4efc
>         Version : 0.90.00
>            UUID : 9af006ca:8845bbd3:bfe78010:bc810f04
>   Creation Time : Thu Dec  3 22:12:12 2009
>      Raid Level : raid10
>   Used Dev Size : 30719936 (29.30 GiB 31.46 GB)
>      Array Size : 30719936 (29.30 GiB 31.46 GB)
>    Raid Devices : 2
>   Total Devices : 2
> Preferred Minor : 126
>
>     Update Time : Sat Feb  9 17:21:45 2013
>           State : clean
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
>        Checksum : e6d627d4 - correct
>          Events : 263120
>
>          Layout : near=2
>      Chunk Size : 64K
>
>       Number   Major   Minor   RaidDevice State
> this     1       8       33        1      active sync   /dev/sdc1
>
>    0     0       8        2        0      active sync   /dev/sda2
>    1     1       8       33        1      active sync   /dev/sdc1
> /dev/sdc2:
>           Magic : a92b4efc
>         Version : 0.90.00
>            UUID : 0deb6f79:aec7ed69:bfe78010:bc810f04
>   Creation Time : Thu Dec  3 22:12:24 2009
>      Raid Level : raid5
>   Used Dev Size : 255999936 (244.14 GiB 262.14 GB)
>      Array Size : 511999872 (488.28 GiB 524.29 GB)
>    Raid Devices : 3
>   Total Devices : 3
> Preferred Minor : 2
>
>     Update Time : Sat Feb  9 16:09:20 2013
>           State : clean
>  Active Devices : 3
> Working Devices : 3
>  Failed Devices : 0
>   Spare Devices : 0
>        Checksum : 8dd15808 - correct
>          Events : 792552
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>       Number   Major   Minor   RaidDevice State
> this     2       8       34        2      active sync   /dev/sdc2
>
>    0     0       8        3        0      active sync   /dev/sda3
>    1     1       8       18        1      active sync   /dev/sdb2
>    2     2       8       34        2      active sync   /dev/sdc2
> /dev/sdc3:
>    MBR Magic : aa55
> Partition[0] :     62910589 sectors at           63 (type 83)
> Partition[1] :      7116795 sectors at     82445692 (type 05)
> /dev/sdc5:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : acd5374f:72628c93:6a906c4b:5f675ce5
>            Name : reading.homeunix.com:3
>   Creation Time : Tue Jul 26 19:00:28 2011
>      Raid Level : raid0
>    Raid Devices : 3
>
>  Avail Dev Size : 62908541 (30.00 GiB 32.21 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : a0efc1b3:94cc6eb8:deea76ca:772b2d2d
>
>     Update Time : Tue Jul 26 19:00:28 2011
>        Checksum : 9eba9119 - correct
>          Events : 0
>
>      Chunk Size : 512K
>
>    Device Role : Active device 2
>    Array State : AAA ('A' == active, '.' == missing)
> mdadm: No md superblock detected on /dev/sdc6.
> [root@lamachine ~]#
>
> Regards,
>
> Daniel
>
> On 10 February 2013 06:29, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>> On Sat, 9 Feb 2013, Daniel Sanabria wrote:
>>
>>> After the upgrade I had to assemble the array manually and didn't get any
>>> errors, but I was still getting the mount problem. I went ahead and recreated
>>> it with mdadm --create --assume-clean and still the same result.
>>
>>
>> Did you save mdadm --examine from the drives *before* you did this?
>>
>> --
>> Mikael Abrahamsson    email: swmike@swm.pp.se
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: help please, can't mount/recover raid 5 array
  2013-02-10  9:36     ` Daniel Sanabria
@ 2013-02-10 21:05       ` Phil Turmel
  2013-02-10 22:01         ` Dave Cundiff
  0 siblings, 1 reply; 10+ messages in thread
From: Phil Turmel @ 2013-02-10 21:05 UTC (permalink / raw)
  To: Daniel Sanabria; +Cc: Mikael Abrahamsson, linux-raid

Hi Daniel,

On 02/10/2013 04:36 AM, Daniel Sanabria wrote:
> On 10 February 2013 09:17, Daniel Sanabria <sanabria.d@gmail.com> wrote:
>> Hi Mikael,
>>
>> Yes I did. Here it is:

[trim /]

>> /dev/sda3:
>>           Magic : a92b4efc
>>         Version : 0.90.00

=====================^^^^^^^

>>            UUID : 0deb6f79:aec7ed69:bfe78010:bc810f04
>>   Creation Time : Thu Dec  3 22:12:24 2009
>>      Raid Level : raid5
>>   Used Dev Size : 255999936 (244.14 GiB 262.14 GB)
>>      Array Size : 511999872 (488.28 GiB 524.29 GB)
>>    Raid Devices : 3
>>   Total Devices : 3
>> Preferred Minor : 2
>>
>>     Update Time : Sat Feb  9 16:09:20 2013
>>           State : clean
>>  Active Devices : 3
>> Working Devices : 3
>>  Failed Devices : 0
>>   Spare Devices : 0
>>        Checksum : 8dd157e5 - correct
>>          Events : 792552
>>
>>          Layout : left-symmetric
>>      Chunk Size : 64K

=====================^^^

>>
>>       Number   Major   Minor   RaidDevice State
>> this     0       8        3        0      active sync   /dev/sda3
>>
>>    0     0       8        3        0      active sync   /dev/sda3
>>    1     1       8       18        1      active sync   /dev/sdb2
>>    2     2       8       34        2      active sync   /dev/sdc2

From your original post:

> /dev/md2:
>         Version : 1.2

====================^^^

>   Creation Time : Sat Feb  9 17:30:32 2013
>      Raid Level : raid5
>      Array Size : 511996928 (488.28 GiB 524.28 GB)
>   Used Dev Size : 255998464 (244.14 GiB 262.14 GB)
>    Raid Devices : 3
>   Total Devices : 3
>     Persistence : Superblock is persistent
> 
>     Update Time : Sat Feb  9 20:47:46 2013
>           State : clean
>  Active Devices : 3
> Working Devices : 3
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 512K

====================^^^^

> 
>            Name : lamachine:2  (local to host lamachine)
>            UUID : 48be851b:f0210b64:e9fbefdf:24c84c5f
>          Events : 2
> 
>     Number   Major   Minor   RaidDevice State
>        0       8        3        0      active sync   /dev/sda3
>        1       8       18        1      active sync   /dev/sdb2
>        2       8       34        2      active sync   /dev/sdc2

I don't know what possessed you to use "mdadm --create" to try to fix
your system, but it is almost always the wrong first step.  But since
you scrambled it with "mdadm --create", you'll have to fix it with
"mdadm --create".

mdadm --stop /dev/md2

mdadm --create --assume-clean /dev/md2 --metadata=0.90 \
	--level=5 --raid-devices=3 --chunk=64 \
	/dev/sda3 /dev/sdb2 /dev/sdc2

Then, you will have to reconstruct the beginning of the array, as much
as 3MB worth, that was replaced with v1.2 metadata.  (The used dev size
differs by 1472kB, suggesting that the new mdadm gave you a new data
offset of 2048, and the rest is the difference in the chunk size.)

Your original report and follow-ups have not clearly indicated what is
on this 524GB array, so I can't be more specific.  If it is a
filesystem, an fsck may fix it with modest losses.

If it is another LVM PV, you may be able to do a vgcfgrestore to reset the
1st megabyte.  You didn't activate a bitmap on the array, so the
remainder of the new metadata space was probably untouched.
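
If you do end up on the vgcfgrestore path, a rough sketch, assuming the
LVM metadata backup still exists under /etc/lvm/backup and using a
made-up VG name:

pvcreate --uuid <old PV UUID> --restorefile /etc/lvm/backup/myvg /dev/md2
vgcfgrestore -f /etc/lvm/backup/myvg myvg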

HTH,

Phil

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: help please, can't mount/recover raid 5 array
  2013-02-10 21:05       ` Phil Turmel
@ 2013-02-10 22:01         ` Dave Cundiff
  2013-02-11 12:49           ` Daniel Sanabria
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Cundiff @ 2013-02-10 22:01 UTC (permalink / raw)
  To: Phil Turmel; +Cc: Daniel Sanabria, Mikael Abrahamsson, linux-raid

On Sun, Feb 10, 2013 at 4:05 PM, Phil Turmel <philip@turmel.org> wrote:
> Hi Daniel,
>
> On 02/10/2013 04:36 AM, Daniel Sanabria wrote:
>> On 10 February 2013 09:17, Daniel Sanabria <sanabria.d@gmail.com> wrote:
>>> Hi Mikael,
>>>
>>> Yes I did. Here it is:
>
> [trim /]
>
>>> /dev/sda3:
>>>           Magic : a92b4efc
>>>         Version : 0.90.00
>
> =====================^^^^^^^
>
>>>            UUID : 0deb6f79:aec7ed69:bfe78010:bc810f04
>>>   Creation Time : Thu Dec  3 22:12:24 2009
>>>      Raid Level : raid5
>>>   Used Dev Size : 255999936 (244.14 GiB 262.14 GB)
>>>      Array Size : 511999872 (488.28 GiB 524.29 GB)
>>>    Raid Devices : 3
>>>   Total Devices : 3
>>> Preferred Minor : 2
>>>
>>>     Update Time : Sat Feb  9 16:09:20 2013
>>>           State : clean
>>>  Active Devices : 3
>>> Working Devices : 3
>>>  Failed Devices : 0
>>>   Spare Devices : 0
>>>        Checksum : 8dd157e5 - correct
>>>          Events : 792552
>>>
>>>          Layout : left-symmetric
>>>      Chunk Size : 64K
>
> =====================^^^
>
>>>
>>>       Number   Major   Minor   RaidDevice State
>>> this     0       8        3        0      active sync   /dev/sda3
>>>
>>>    0     0       8        3        0      active sync   /dev/sda3
>>>    1     1       8       18        1      active sync   /dev/sdb2
>>>    2     2       8       34        2      active sync   /dev/sdc2
>
> From your original post:
>
>> /dev/md2:
>>         Version : 1.2
>
> ====================^^^
>
>>   Creation Time : Sat Feb  9 17:30:32 2013
>>      Raid Level : raid5
>>      Array Size : 511996928 (488.28 GiB 524.28 GB)
>>   Used Dev Size : 255998464 (244.14 GiB 262.14 GB)
>>    Raid Devices : 3
>>   Total Devices : 3
>>     Persistence : Superblock is persistent
>>
>>     Update Time : Sat Feb  9 20:47:46 2013
>>           State : clean
>>  Active Devices : 3
>> Working Devices : 3
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>
> ====================^^^^
>
>>
>>            Name : lamachine:2  (local to host lamachine)
>>            UUID : 48be851b:f0210b64:e9fbefdf:24c84c5f
>>          Events : 2
>>
>>     Number   Major   Minor   RaidDevice State
>>        0       8        3        0      active sync   /dev/sda3
>>        1       8       18        1      active sync   /dev/sdb2
>>        2       8       34        2      active sync   /dev/sdc2
>
> I don't know what possessed you to use "mdadm --create" to try to fix
> your system, but it is almost always the wrong first step.  But since
> you scrambled it with "mdadm --create", you'll have to fix it with
> "mdadm --create".
>
> mdadm --stop /dev/md2
>
> mdadm --create --assume-clean /dev/md2 --metadata=0.90 \
>         --level=5 --raid-devices=3 --chunk=64 \
>         /dev/sda3 /dev/sdb2 /dev/sdc2
>

It looks like you're using a dracut-based boot system. Once you get the
array created and mounting, you'll need to update /etc/mdadm.conf with
the new array information and run dracut to update your initrd with
the new configuration. If not, problems could crop up down the road.
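
Roughly something like this (review /etc/mdadm.conf afterwards for stale
entries; dracut -f rebuilds the initramfs for the running kernel):

mdadm --detail --scan >> /etc/mdadm.conf
dracut -f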

> Then, you will have to reconstruct the beginning of the array, as much
> as 3MB worth, that was replaced with v1.2 metadata.  (The used dev size
> differs by 1472kB, suggesting that the new mdadm gave you a new data
> offset of 2048, and the rest is the difference in the chunk size.)
>
> Your original report and follow-ups have not clearly indicated what is
> on this 524GB array, so I can't be more specific.  If it is a
> filesystem, an fsck may fix it with modest losses.
>
> If it is another LVM PV, you may be able to do a vgcfgrestore to reset the
> 1st megabyte.  You didn't activate a bitmap on the array, so the
> remainder of the new metadata space was probably untouched.
>

If the data on this array is important and has no backups, it would be
a good idea to image the drives before you start doing anything else.
Most of your data can likely be recovered, but you can easily destroy
it beyond conventional repair if you're not very careful at this point.
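
For example, with GNU ddrescue (paths are only illustrative; plain dd
also works for healthy drives):

ddrescue /dev/sda /mnt/backup/sda.img /mnt/backup/sda.map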

According to the fstab in the original post, it looks like it's just an
ext4 filesystem on top of the md. If that is the case, an fsck should
get you going again after creating the array. You can try a regular
fsck, but your superblock is most likely gone. A backup superblock, if
needed, is generally accessible by adding -b 32768 to the fsck.
Hopefully you didn't have many files in the root of that filesystem;
they are all most likely going to end up as randomly numbered files and
directories in lost+found.
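
If it comes to that, the command would look roughly like this (md device
taken from your earlier output):

e2fsck -b 32768 /dev/md2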


--
Dave Cundiff
System Administrator
A2Hosting, Inc
http://www.a2hosting.com

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: help please, can't mount/recover raid 5 array
  2013-02-10 22:01         ` Dave Cundiff
@ 2013-02-11 12:49           ` Daniel Sanabria
  2013-02-11 16:30             ` Mikael Abrahamsson
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Sanabria @ 2013-02-11 12:49 UTC (permalink / raw)
  To: Dave Cundiff; +Cc: Phil Turmel, Mikael Abrahamsson, linux-raid

Thanks a million, guys!!! I re-created the RAID and fsck'd it, and it's
mounting fine now. The array is my /home partition and I can't see any
significant losses. But I'm still not sure what happened; I mean, what
considerations should I take next time I upgrade?

Thanks again,

Daniel

On 10 February 2013 22:01, Dave Cundiff <syshackmin@gmail.com> wrote:
> On Sun, Feb 10, 2013 at 4:05 PM, Phil Turmel <philip@turmel.org> wrote:
>> Hi Daniel,
>>
>> On 02/10/2013 04:36 AM, Daniel Sanabria wrote:
>>> On 10 February 2013 09:17, Daniel Sanabria <sanabria.d@gmail.com> wrote:
>>>> Hi Mikael,
>>>>
>>>> Yes I did. Here it is:
>>
>> [trim /]
>>
>>>> /dev/sda3:
>>>>           Magic : a92b4efc
>>>>         Version : 0.90.00
>>
>> =====================^^^^^^^
>>
>>>>            UUID : 0deb6f79:aec7ed69:bfe78010:bc810f04
>>>>   Creation Time : Thu Dec  3 22:12:24 2009
>>>>      Raid Level : raid5
>>>>   Used Dev Size : 255999936 (244.14 GiB 262.14 GB)
>>>>      Array Size : 511999872 (488.28 GiB 524.29 GB)
>>>>    Raid Devices : 3
>>>>   Total Devices : 3
>>>> Preferred Minor : 2
>>>>
>>>>     Update Time : Sat Feb  9 16:09:20 2013
>>>>           State : clean
>>>>  Active Devices : 3
>>>> Working Devices : 3
>>>>  Failed Devices : 0
>>>>   Spare Devices : 0
>>>>        Checksum : 8dd157e5 - correct
>>>>          Events : 792552
>>>>
>>>>          Layout : left-symmetric
>>>>      Chunk Size : 64K
>>
>> =====================^^^
>>
>>>>
>>>>       Number   Major   Minor   RaidDevice State
>>>> this     0       8        3        0      active sync   /dev/sda3
>>>>
>>>>    0     0       8        3        0      active sync   /dev/sda3
>>>>    1     1       8       18        1      active sync   /dev/sdb2
>>>>    2     2       8       34        2      active sync   /dev/sdc2
>>
>> From your original post:
>>
>>> /dev/md2:
>>>         Version : 1.2
>>
>> ====================^^^
>>
>>>   Creation Time : Sat Feb  9 17:30:32 2013
>>>      Raid Level : raid5
>>>      Array Size : 511996928 (488.28 GiB 524.28 GB)
>>>   Used Dev Size : 255998464 (244.14 GiB 262.14 GB)
>>>    Raid Devices : 3
>>>   Total Devices : 3
>>>     Persistence : Superblock is persistent
>>>
>>>     Update Time : Sat Feb  9 20:47:46 2013
>>>           State : clean
>>>  Active Devices : 3
>>> Working Devices : 3
>>>  Failed Devices : 0
>>>   Spare Devices : 0
>>>
>>>          Layout : left-symmetric
>>>      Chunk Size : 512K
>>
>> ====================^^^^
>>
>>>
>>>            Name : lamachine:2  (local to host lamachine)
>>>            UUID : 48be851b:f0210b64:e9fbefdf:24c84c5f
>>>          Events : 2
>>>
>>>     Number   Major   Minor   RaidDevice State
>>>        0       8        3        0      active sync   /dev/sda3
>>>        1       8       18        1      active sync   /dev/sdb2
>>>        2       8       34        2      active sync   /dev/sdc2
>>
>> I don't know what possessed you to use "mdadm --create" to try to fix
>> your system, but it is almost always the wrong first step.  But since
>> you scrambled it with "mdadm --create", you'll have to fix it with
>> "mdadm --create".
>>
>> mdadm --stop /dev/md2
>>
>> mdadm --create --assume-clean /dev/md2 --metadata=0.90 \
>>         --level=5 --raid-devices=3 --chunk=64 \
>>         /dev/sda3 /dev/sdb2 /dev/sdc2
>>
>
> It looks like you're using a dracut-based boot system. Once you get the
> array created and mounting, you'll need to update /etc/mdadm.conf with
> the new array information and run dracut to update your initrd with
> the new configuration. If not, problems could crop up down the road.
>
>> Then, you will have to reconstruct the beginning of the array, as much
>> as 3MB worth, that was replaced with v1.2 metadata.  (The used dev size
>> differs by 1472kB, suggesting that the new mdadm gave you a new data
>> offset of 2048, and the rest is the difference in the chunk size.)
>>
>> Your original report and follow-ups have not clearly indicated what is
>> on this 524GB array, so I can't be more specific.  If it is a
>> filesystem, an fsck may fix it with modest losses.
>>
>> If it is another LVM PV, you may be able to do a vgcfgrestore to reset the
>> 1st megabyte.  You didn't activate a bitmap on the array, so the
>> remainder of the new metadata space was probably untouched.
>>
>
> If the data on this array is important and has no backups, it would be
> a good idea to image the drives before you start doing anything else.
> Most of your data can likely be recovered, but you can easily destroy
> it beyond conventional repair if you're not very careful at this point.
>
> According to the fstab in the original post, it looks like it's just an
> ext4 filesystem on top of the md. If that is the case, an fsck should
> get you going again after creating the array. You can try a regular
> fsck, but your superblock is most likely gone. A backup superblock, if
> needed, is generally accessible by adding -b 32768 to the fsck.
> Hopefully you didn't have many files in the root of that filesystem;
> they are all most likely going to end up as randomly numbered files and
> directories in lost+found.
>
>
> --
> Dave Cundiff
> System Administrator
> A2Hosting, Inc
> http://www.a2hosting.com

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: help please, can't mount/recover raid 5 array
  2013-02-11 12:49           ` Daniel Sanabria
@ 2013-02-11 16:30             ` Mikael Abrahamsson
  2013-02-11 16:39               ` Daniel Sanabria
  0 siblings, 1 reply; 10+ messages in thread
From: Mikael Abrahamsson @ 2013-02-11 16:30 UTC (permalink / raw)
  To: Daniel Sanabria; +Cc: linux-raid

On Mon, 11 Feb 2013, Daniel Sanabria wrote:

> Thanks a million, guys!!! I re-created the RAID and fsck'd it, and it's
> mounting fine now. The array is my /home partition and I can't see any
> significant losses. But I'm still not sure what happened; I mean, what
> considerations should I take next time I upgrade?

First of all: --create --assume-clean is a huge big enormous hammer. It 
shouldn't be used unless absolutely necessary, when all other options are 
exhausted.

I am not aware of any gotchas when upgrading; I have done it numerous
times, even moved array drives between machines, and it's worked well so
far. From your information there isn't really any way to tell what
happened.

So the best approach before the next upgrade is to save the mdadm --examine
output and hope it works properly (it should). If it doesn't, please send an
email to the list with as much information as possible (mdadm --examine,
dmesg, etc.) and see where that leads before using the big hammer.
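
For example, before rebooting into the new release, something as simple
as this preserves the state (device names are just examples):

mdadm --examine /dev/sd[abc]* > /root/md-examine-before-upgrade.txt
mdadm --detail /dev/md* >> /root/md-examine-before-upgrade.txt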

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: help please, can't mount/recover raid 5 array
  2013-02-11 16:30             ` Mikael Abrahamsson
@ 2013-02-11 16:39               ` Daniel Sanabria
  0 siblings, 0 replies; 10+ messages in thread
From: Daniel Sanabria @ 2013-02-11 16:39 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: linux-raid

Sure,

Thanks again, Mikael.

Daniel

On 11 February 2013 16:30, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Mon, 11 Feb 2013, Daniel Sanabria wrote:
>
>> Thanks a million, guys!!! I re-created the RAID and fsck'd it, and it's
>> mounting fine now. The array is my /home partition and I can't see any
>> significant losses. But I'm still not sure what happened; I mean, what
>> considerations should I take next time I upgrade?
>
>
> First of all: --create --assume-clean is a huge big enormous hammer. It
> shouldn't be used unless absolutely necessary, when all other options are
> exhausted.
>
> I am not aware of any gotchas when upgrading; I have done it numerous times,
> even moved array drives between machines, and it's worked well so far. From
> your information there isn't really any way to tell what happened.
>
> So the best approach before the next upgrade is to save the mdadm --examine
> output and hope it works properly (it should). If it doesn't, please send an
> email to the list with as much information as possible (mdadm --examine,
> dmesg, etc.) and see where that leads before using the big hammer.
>
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2013-02-11 16:39 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-02-09 21:03 help please, can't mount/recover raid 5 array Daniel Sanabria
2013-02-09 23:00 ` Dave Cundiff
2013-02-10  9:17   ` Daniel Sanabria
2013-02-10  6:29 ` Mikael Abrahamsson
     [not found]   ` <CAHscji0h5nHUssKi23BMfR=Ek+jSH+vK0odYNWkzrVDf6t18mw@mail.gmail.com>
2013-02-10  9:36     ` Daniel Sanabria
2013-02-10 21:05       ` Phil Turmel
2013-02-10 22:01         ` Dave Cundiff
2013-02-11 12:49           ` Daniel Sanabria
2013-02-11 16:30             ` Mikael Abrahamsson
2013-02-11 16:39               ` Daniel Sanabria
