* [linux-lvm] Unable to mount LVM partition - table too small
@ 2010-09-07 17:34 Adam Newham
2010-09-20 20:52 ` Adam Newham
2010-09-22 6:21 ` Luca Berra
0 siblings, 2 replies; 8+ messages in thread
From: Adam Newham @ 2010-09-07 17:34 UTC (permalink / raw)
To: linux-lvm
I didn't see this getting posted, so re-posting. Sorry if you get this
twice.
Hi, hopefully somebody on this mailing list will be able to assist. I've
done lots of Googling and tried a few things but with no success.
I recently had multiple hardware failures and had to re-install the OS.
My server is set up with an OS drive and a data drive. The OS drive is a
single HDD which had a RHEL5-based distro on it (ClearOS), while the data
drive consists of a software RAID level 5 partition across 4x 1TB drives
(2.7TB available after ext3 format, with 1TB used). On top of this is an
LVM2 partition with a single PV/LV/VG spanning the whole RAID partition.
The hardware failures that I had were memory & motherboard, with the
first RMA'd motherboard powering off sporadically (see note below).
However, after completing the OS re-install, I'm unable to access the LVM
partition. I originally tried Ubuntu 10.04: once mdadm/lvm2 were
installed, the distro saw the RAID and LVM container, but I'm unable to
mount the symbolic link (/dev/lvm-raid5/lvm0) or the dev mapper link
(/dev/mapper/lvm-raid5-lvm0). (See logs below.) One thing to note: as
soon as the distro was installed and the RAID was assembled, a re-sync
occurred. This wasn't entirely unexpected, as the first RMA'd
motherboard was defective and would power off during the boot sequence,
forcing a check of the disc during boot which only got a few % into the
sequence before a kernel panic was observed (/etc/fstab was modified by
booting into rescue mode and disabling this once I realized it was
happening).
Thinking maybe it was something with the Ubuntu distro, I tried
installing CentOS 5.5 (and the original ClearOS distro), but both these
distros give the same results. I can auto-create the /etc/mdadm.conf
file by mdadm --detail --scan or mdadm --examine --scan (sketched just
below), but they can't see any Physical/Logical volumes. One interesting
point to note here is that /proc/partitions does not contain /dev/sda1
to /dev/sdd1 etc., just the raw drives. fdisk -l, however, shows all of
the partition information. I believe there is an issue with some Red Hat
based distros with how /dev is populated; specifically, it was
introduced in FC10/11. I tried FC9 but got results similar to the
RHEL5-based distros.
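For reference, regenerating the config was roughly along these lines (a
sketch; the config file path may differ per distro):
mdadm --detail --scan >> /etc/mdadm.conf
mdadm --examine --scan >> /etc/mdadm.conf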
I'd really like to get this data back. I have some backups (the discs
contained Video, Music & Photos) in the form of original CDs & DVDs, but
for the Photos, due to some other hardware failures, I have a gap from
March 2008 until around April 2010.
So here are the logs from what I can determine:
Ubuntu 10.04
/proc/partitions
major minor #blocks name
8 0 976762584 sda
8 1 976760001 sda1
8 16 976762584 sdb
8 17 976760001 sdb1
8 32 976762584 sdc
8 33 976760001 sdc1
8 48 976762584 sdd
8 49 976760001 sdd1
8 64 58605120 sde
8 65 56165376 sde1
8 66 1 sde2
8 69 2437120 sde5
9 0 2930287488 md0
259 0 976760001 md0p1
/proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active raid5 sdc[2] sdb[1] sda[0] sdd[3]
2930287488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 121601 976760001 fd Linux raid autodetect
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 121601 976760001 fd Linux raid autodetect
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdc1 1 121601 976760001 fd Linux raid autodetect
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdd1 1 121601 976760001 fd Linux raid autodetect
Disk /dev/sde: 60.0 GB, 60011642880 bytes
255 heads, 63 sectors/track, 7296 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005cd42
Device Boot Start End Blocks Id System
/dev/sde1 * 1 6993 56165376 83 Linux
/dev/sde2 6993 7296 2437121 5 Extended
/dev/sde5 6993 7296 2437120 82 Linux swap / Solaris
Disk /dev/md0: 3000.6 GB, 3000614387712 bytes
255 heads, 63 sectors/track, 364803 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/md0p1 1 121601 976760001 fd Linux raid autodetect
Partition 1 does not start on physical sector boundary.
pvscan
PV /dev/md0p1 VG lvm-raid5 lvm2 [2.73 TiB / 0 free]
Total: 1 [746.53 GiB] / in use: 1 [746.53 GiB] / in no VG: 0 [0 ]
lvscan
Reading all physical volumes. This may take a while...
Found volume group "lvm-raid5" using metadata type lvm2
vgscan
Reading all physical volumes. This may take a while...
Found volume group "lvm-raid5" using metadata type lvm2
vgdisplay
--- Volume group ---
VG Name lvm-raid5
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.73 TiB
PE Size 32.00 MiB
Total PE 89425
Alloc PE / Size 89425 / 2.73 TiB
Free PE / Size 0 / 0
VG UUID wovrCm-knof-Ycdl-LdXt-4t28-mPWq-kngufG
lvmdiskscan
/dev/ram0 [ 64.00 MiB]
/dev/md0p1 [ 931.51 GiB] LVM physical volume
/dev/ram1 [ 64.00 MiB]
/dev/ram2 [ 64.00 MiB]
/dev/ram3 [ 64.00 MiB]
/dev/ram4 [ 64.00 MiB]
/dev/ram5 [ 64.00 MiB]
/dev/ram6 [ 64.00 MiB]
/dev/ram7 [ 64.00 MiB]
/dev/ram8 [ 64.00 MiB]
/dev/ram9 [ 64.00 MiB]
/dev/ram10 [ 64.00 MiB]
/dev/ram11 [ 64.00 MiB]
/dev/ram12 [ 64.00 MiB]
/dev/ram13 [ 64.00 MiB]
/dev/ram14 [ 64.00 MiB]
/dev/ram15 [ 64.00 MiB]
/dev/root [ 53.56 GiB]
/dev/sde5 [ 2.32 GiB]
1 disk
17 partitions
0 LVM physical volume whole disks
1 LVM physical volume
tail /var/log/messages (after mdadm --assemble /dev/md0 and mount
/dev/lvm-raid5/lvm0 /mnt/lvm-raid5)
Sep 3 18:46:13 adam-desktop kernel: [ 479.014444] md: bind<sdb>
Sep 3 18:46:13 adam-desktop kernel: [ 479.015421] md: bind<sdc>
Sep 3 18:46:13 adam-desktop kernel: [ 479.015753] md: bind<sdd>
Sep 3 18:46:13 adam-desktop kernel: [ 479.016272] md: bind<sda>
Sep 3 18:46:13 adam-desktop kernel: [ 479.022937] raid5: device sda
operational as raid disk 0
Sep 3 18:46:13 adam-desktop kernel: [ 479.022944] raid5: device sdd
operational as raid disk 3
Sep 3 18:46:13 adam-desktop kernel: [ 479.022950] raid5: device sdc
operational as raid disk 2
Sep 3 18:46:13 adam-desktop kernel: [ 479.022955] raid5: device sdb
operational as raid disk 1
Sep 3 18:46:13 adam-desktop kernel: [ 479.023690] raid5: allocated
4222kB for md0
Sep 3 18:46:13 adam-desktop kernel: [ 479.024690] 0: w=1 pa=0 pr=4 m=1
a=2 r=4 op1=0 op2=0
Sep 3 18:46:13 adam-desktop kernel: [ 479.024697] 3: w=2 pa=0 pr=4 m=1
a=2 r=4 op1=0 op2=0
Sep 3 18:46:13 adam-desktop kernel: [ 479.024703] 2: w=3 pa=0 pr=4 m=1
a=2 r=4 op1=0 op2=0
Sep 3 18:46:13 adam-desktop kernel: [ 479.024709] 1: w=4 pa=0 pr=4 m=1
a=2 r=4 op1=0 op2=0
Sep 3 18:46:13 adam-desktop kernel: [ 479.024715] raid5: raid level 5
set md0 active with 4 out of 4 devices, algorithm 2
Sep 3 18:46:13 adam-desktop kernel: [ 479.024719] RAID5 conf printout:
Sep 3 18:46:13 adam-desktop kernel: [ 479.024722] --- rd:4 wd:4
Sep 3 18:46:13 adam-desktop kernel: [ 479.024726] disk 0, o:1, dev:sda
Sep 3 18:46:13 adam-desktop kernel: [ 479.024730] disk 1, o:1, dev:sdb
Sep 3 18:46:13 adam-desktop kernel: [ 479.024734] disk 2, o:1, dev:sdc
Sep 3 18:46:13 adam-desktop kernel: [ 479.024737] disk 3, o:1, dev:sdd
Sep 3 18:46:13 adam-desktop kernel: [ 479.024823] md0: detected capacity
change from 0 to 3000614387712
Sep 3 18:46:13 adam-desktop kernel: [ 479.028687] md0: p1
Sep 3 18:46:13 adam-desktop kernel: [ 479.207359] device-mapper: table:
252:0: md0p1 too small for target: start=384, len=5860556800,
dev_size=1953520002
mdadm --detail /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Sat Nov 1 22:14:18 2008
Raid Level : raid5
Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Sep 3 18:39:58 2010
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
Events : 0.68
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 8 48 3 active sync /dev/sdd
mdadm --detail /dev/sda1
ARRAY /dev/md0 level=raid5 num-devices=4
UUID=08558923:881d9efd:464c249d:988d2ec6
Note: performing this for /dev/sdb1 to /dev/sdd1 produces no output. As
the UUID for /dev/md0 is above, I removed this line from the mdadm.conf
file.
As I don't have the original /etc/lvm info, here is what I managed to
recover by doing a dd from the discs and cut & pasting it into an LVM
template (extraction sketched below).
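Roughly, the extraction was along these lines (a sketch from memory;
the offsets are illustrative, since the LVM2 text metadata lives near
the start of the PV):
dd if=/dev/md0p1 bs=512 skip=1 count=2048 2>/dev/null | strings > lvm-meta.txt
# then search lvm-meta.txt for "lvm-raid5" and paste the block into the template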
/etc/lvm/backup/lvm_raid5_00000.vg
# Generated by LVM2 version 2.02.37-RHEL4 (2008-06-06): Tue Nov 18
13:45:06 2008
contents = "Text Format Volume Group"
version = 1
description = ""
creation_host = "pebblebeach.thenewhams.lan" # Linux
pebblebeach.thenewhams.lan 2.6.27 #4 SMP Mon Nov 17 11:05:05 PST 2008 i686
creation_time = 1227044706 # Tue Nov 18 13:45:06 2008
lvm-raid5 {
id = "wovrCm-knof-Ycdl-LdXt-4t28-mPWq-kngufG"
seqno = 2
status = ["RESIZEABLE", "READ", "WRITE"]
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "aBkcEY-nZho-iWe5-700D-kDSy-pTAK-sJJFYm"
device = "/dev/md0p1" # Hint only
status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 89425
}
}
logical_volumes {
lvm0 {
id = "lzHyck-6X6E-48pC-uW1N-OQmp-Ayjt-vbAvVR"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 89425
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
}
}
}
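I assume the eventual restore step would be something like the
following, though I haven't dared run it yet:
vgcfgrestore -f /etc/lvm/backup/lvm_raid5_00000.vg lvm-raid5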
Some info from when in CentOS/EL5 land...
/proc/partitions (note the missing sub-partitions; this is why I believe
the lv/pv scans don't see any LVM info)
major minor #blocks name
3 0 156290904 hda
3 1 200781 hda1
3 2 4192965 hda2
3 3 151894575 hda3
8 0 976762584 sda
8 16 976762584 sdb
8 32 976762584 sdc
8 48 976762584 sdd
9 0 2930287488 md0
/proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda[0] sdd[3] sdc[2] sdb[1]
2930287488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
* Re: [linux-lvm] Unable to mount LVM partition - table too small
2010-09-07 17:34 [linux-lvm] Unable to mount LVM partition - table too small Adam Newham
@ 2010-09-20 20:52 ` Adam Newham
2010-09-22 6:21 ` Luca Berra
1 sibling, 0 replies; 8+ messages in thread
From: Adam Newham @ 2010-09-20 20:52 UTC (permalink / raw)
To: LVM general discussion and development
Bump....no one?
On 9/7/2010 10:34 AM, Adam Newham wrote:
> [original message quoted in full; snipped]
* Re: [linux-lvm] Unable to mount LVM partition - table too small
2010-09-07 17:34 [linux-lvm] Unable to mount LVM partition - table too small Adam Newham
2010-09-20 20:52 ` Adam Newham
@ 2010-09-22 6:21 ` Luca Berra
2010-09-22 15:39 ` Adam NEWHAM
1 sibling, 1 reply; 8+ messages in thread
From: Luca Berra @ 2010-09-22 6:21 UTC (permalink / raw)
To: linux-lvm
On Tue, Sep 07, 2010 at 10:34:55AM -0700, Adam Newham wrote:
> vgdisplay
> --- Volume group ---
> VG Name lvm-raid5
> System ID
> Format lvm2
> Metadata Areas 1
> Metadata Sequence No 2
> VG Access read/write
> VG Status resizable
> MAX LV 0
> Cur LV 1
> Open LV 0
> Max PV 0
> Cur PV 1
> Act PV 1
> VG Size 2.73 TiB
> PE Size 32.00 MiB
> Total PE 89425
> Alloc PE / Size 89425 / 2.73 TiB
> Free PE / Size 0 / 0
> VG UUID wovrCm-knof-Ycdl-LdXt-4t28-mPWq-kngufG
Does vgchange -a y fail?
Is there any error message?
> /proc/partitions (note the missing sub-partitions; this is why I believe
> the lv/pv scans don't see any LVM info)
> major minor #blocks name
>
> 3 0 156290904 hda
> 3 1 200781 hda1
> 3 2 4192965 hda2
> 3 3 151894575 hda3
> 8 0 976762584 sda
> 8 16 976762584 sdb
> 8 32 976762584 sdc
> 8 48 976762584 sdd
> 9 0 2930287488 md0
The partition info for md component devices is correctly removed from
the kernel, to avoid confusion.
The md device itself should be partitionable. Can I see your
/etc/mdadm.conf?
L.
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
* Re: [linux-lvm] Unable to mount LVM partition - table too small
2010-09-22 6:21 ` Luca Berra
@ 2010-09-22 15:39 ` Adam NEWHAM
2010-09-23 6:43 ` Luca Berra
0 siblings, 1 reply; 8+ messages in thread
From: Adam NEWHAM @ 2010-09-22 15:39 UTC (permalink / raw)
To: LVM general discussion and development
Thanks for looking into this. Here is the requested info, but I think something might be up with the array. I've captured additional info - I also have a screen capture from the Disk Utility but I will probably have to send that in a private email as it requires an attachment.
Here is my mdadm.conf (I recently commented out the DEVICE partitions line, as I thought Ubuntu might be picking up invalid metadata; see the output from examine below). This is the same config used in the RHEL 5 install; again, I tried with and without the DEVICE partitions line.
#DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
Here is what comes from mdadm --examine --scan:
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=08558923:881d9efd:464c249d:988d2ec6
This ties in with what the Disk Utility is seeing, which is why I deleted the second line, as I think one of the disks has invalid metadata.
Performing examine on each of the RAID members gives:
mdadm: No md superblock detected on /dev/sda.
I've listed the other 3 drives below so that the most relevant info is at the start of this email.
vgchange -a y displays the following at the console:
$ sudo vgchange -a y lvm-raid5
device-mapper: resume ioctl failed: Invalid argument
Unable to resume lvm--raid5-lvm0 (252:0)
1 logical volume(s) in volume group "lvm-raid5" now active
With the following in /var/log/messages
kernel: [ 553.685856] device-mapper: table: 252:0: md0p1 too small for target: start=384, len=5860556800, dev_size=1953520002
But any attempt to mount the LVM results in:
mount: /dev/mapper/lvm--raid5-lvm0 already mounted or //mnt/lvm-raid5 busy
Obviously the mount is failing because something is out of whack.
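Presumably dmsetup would show the half-created mapping (a sketch; I
haven't pasted that output here):
dmsetup info lvm--raid5-lvm0
dmsetup table lvm--raid5-lvm0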
/dev/sdb:
Magic : a92b4efc
Version : 00.90.03
UUID : b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
Creation Time : Sat Nov 1 22:14:18 2008
Raid Level : raid5
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Mon Sep 20 19:24:26 2010
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : e9cec762 - correct
Events : 68
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 0 0 active sync /dev/sda
0 0 8 0 0 active sync /dev/sda
1 1 8 16 1 active sync /dev/sdb
2 2 8 32 2 active sync /dev/sdc
3 3 8 48 3 active sync /dev/sdd
/dev/sdc:
Magic : a92b4efc
Version : 00.90.03
UUID : b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
Creation Time : Sat Nov 1 22:14:18 2008
Raid Level : raid5
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Mon Sep 20 19:24:26 2010
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : e9cec774 - correct
Events : 68
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 1 8 16 1 active sync /dev/sdb
0 0 8 0 0 active sync /dev/sda
1 1 8 16 1 active sync /dev/sdb
2 2 8 32 2 active sync /dev/sdc
3 3 8 48 3 active sync /dev/sdd
/dev/sdd:
Magic : a92b4efc
Version : 00.90.03
UUID : b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
Creation Time : Sat Nov 1 22:14:18 2008
Raid Level : raid5
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Mon Sep 20 19:24:26 2010
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : e9cec786 - correct
Events : 68
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 32 2 active sync /dev/sdc
0 0 8 0 0 active sync /dev/sda
1 1 8 16 1 active sync /dev/sdb
2 2 8 32 2 active sync /dev/sdc
3 3 8 48 3 active sync /dev/sdd
-----Original Message-----
From: bluca@comedia.it [mailto:linux-lvm-bounces@redhat.com] On Behalf Of Luca Berra
Sent: Tuesday, September 21, 2010 11:21 PM
To: linux-lvm@redhat.com
Subject: Re: [linux-lvm] Unable to mount LVM partition - table too small
> [Luca's message quoted in full; snipped]
* Re: [linux-lvm] Unable to mount LVM partition - table too small
2010-09-22 15:39 ` Adam NEWHAM
@ 2010-09-23 6:43 ` Luca Berra
2010-09-23 17:43 ` Adam Newham
0 siblings, 1 reply; 8+ messages in thread
From: Luca Berra @ 2010-09-23 6:43 UTC (permalink / raw)
To: Adam NEWHAM; +Cc: LVM general discussion and development
On Wed, Sep 22, 2010 at 08:39:27AM -0700, Adam NEWHAM wrote:
>Thanks for looking into this. Here is the requested info, but I think something might be up with the array. I've captured additional info - I also have a screen capture from the Disk Utility but I will probably have to send that in a private email as it requires an attachment.
>Here is what comes from mdmadm --examine --scan
>
>ARRAY /dev/md0 level=raid5 num-devices=4 UUID=b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
>ARRAY /dev/md0 level=raid5 num-devices=4 UUID=08558923:881d9efd:464c249d:988d2ec6
Can you please show the output of mdadm --examine --scan --verbose
and, to be sure:
mdadm --examine --verbose /dev/sd[abcd] /dev/sd[abcd]1
I am starting to believe your array was originally composed of
partitions contained in /dev/sda, sdb, sdc, sdd.
Now md saw a complete array on the whole devices, and created a
partitioned /dev/md0 by reading the partition table on the first drive,
so the array size is ~2.7T but the md0p1 partition is ~0.9T. (With 0.90
metadata the superblock sits near the end of the device, so a whole-disk
superblock and a partition superblock can coexist on the same drive.)
The LVM failure:
>kernel: [ 553.685856] device-mapper: table: 252:0: md0p1 too small for target: start=384, len=5860556800, dev_size=1953520002
means just this: you tried to activate a logical volume starting at
sector 384, sized 5,860,556,800 sectors (~2.7T), but the device is only
1,953,520,002 sectors (~0.9T).
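For what it's worth, those numbers line up exactly with the vg backup
you posted:
89425 extents * 32 MiB/extent * 2048 sectors/MiB = 5,860,556,800 sectors (the LV)
976,760,001 1-KiB blocks * 2 = 1,953,520,002 sectors (md0p1)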
This is consistent with the /proc/partitions you posted in the first
message:
> 8 0 976762584 sda
> 8 1 976760001 sda1
> 8 16 976762584 sdb
> 8 17 976760001 sdb1
> 8 32 976762584 sdc
> 8 33 976760001 sdc1
> 8 48 976762584 sdd
> 8 49 976760001 sdd1
> 8 64 58605120 sde
> 8 65 56165376 sde1
> 8 66 1 sde2
> 8 69 2437120 sde5
> 9 0 2930287488 md0
> 259 0 976760001 md0p1
^^^^^^^^^
If we find valid md metadata on the partitions, we can create an
mdadm.conf with
DEVICE /dev/sd[abcd]1
which will ignore the whole drives.
L.
* Re: [linux-lvm] Unable to mount LVM partition - table too small
2010-09-23 6:43 ` Luca Berra
@ 2010-09-23 17:43 ` Adam Newham
2010-09-23 21:12 ` Luca Berra
0 siblings, 1 reply; 8+ messages in thread
From: Adam Newham @ 2010-09-23 17:43 UTC (permalink / raw)
To: Luca Berra; +Cc: LVM general discussion and development
Here is the info requested. It looks like on this particular boot,
/dev/sda was mapped for the OS drive and the RAID got mapped to
/dev/sd[bcde]; therefore I've dumped info for /dev/sd[bcde]. So at least
for this boot sequence it looks like the RAID array doesn't align with
the earlier device assignment.
In previous emails/logs, the RAID was /dev/sd[abcd] with the OS drive
mapped to /dev/sde. To confirm this I redid fdisk -l and cat
/proc/partitions and /proc/mdstat.
You are correct, the original array was /dev/sd[abcd] (or now
/dev/sd[bcde]), with the OS drive on /dev/hda. It looks like Ubuntu maps
IDE drives to sd[x]. The 4x 1TB drives are SATA; the OS drive is IDE. A
single PV/LV/VG LVM sits on top of the 3TB RAID5 with ext3 on top of
that, which should yield 2.7TB of usable data.
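(The sizes check out: with RAID5, 3 of the 4 drives hold data, so 3 *
931.51 GiB = 2794.5 GiB, i.e. about 2.73 TiB usable, matching the
reported array size.)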
I did some Googling and came across this:
http://kevin.deldycke.com/2007/03/how-to-recover-a-raid-array-after-having-zero-ized-superblocks/
The missing superblock error is displayed when dumping /dev/sda which
makes sense as this isn't part of the RAID.
In the link above, the author recreated the RAID; however, I haven't
wanted to do this (or anything else that might get categorized as
stupid) without guidance or a dd copy, in case I toasted the data.
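If I do go down that road, I'd image each member first with something
like the following (assuming I can scrape together enough scratch
space):
dd if=/dev/sdb of=/backup/sdb.img bs=1M conv=noerror,sync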
I also don't remember ever seeing /dev/md0p1 before, just a /dev/md0
root@adam-desktop:~# mdadm --examine --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=4
UUID=b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
devices=/dev/sde,/dev/sdd,/dev/sdc,/dev/sdb
ARRAY /dev/md0 level=raid5 num-devices=4
UUID=08558923:881d9efd:464c249d:988d2ec6
devices=/dev/sde1,/dev/sdd1,/dev/sdc1,/dev/sdb1
root@adam-desktop:~# mdadm --examine --verbose /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 00.90.03
UUID : b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
Creation Time : Sat Nov 1 22:14:18 2008
Raid Level : raid5
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Mon Sep 20 19:24:26 2010
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : e9cec762 - correct
Events : 68
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 0 0 active sync /dev/sda
0 0 8 0 0 active sync /dev/sda
1 1 8 16 1 active sync /dev/sdb
2 2 8 32 2 active sync /dev/sdc
3 3 8 48 3 active sync /dev/sdd
root@adam-desktop:~# mdadm --examine --verbose /dev/sdc
/dev/sdc:
Magic : a92b4efc
Version : 00.90.03
UUID : b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
Creation Time : Sat Nov 1 22:14:18 2008
Raid Level : raid5
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Mon Sep 20 19:24:26 2010
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : e9cec774 - correct
Events : 68
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 1 8 16 1 active sync /dev/sdb
0 0 8 0 0 active sync /dev/sda
1 1 8 16 1 active sync /dev/sdb
2 2 8 32 2 active sync /dev/sdc
3 3 8 48 3 active sync /dev/sdd
root@adam-desktop:~# mdadm --examine --verbose /dev/sdd
/dev/sdd:
Magic : a92b4efc
Version : 00.90.03
UUID : b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
Creation Time : Sat Nov 1 22:14:18 2008
Raid Level : raid5
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Mon Sep 20 19:24:26 2010
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : e9cec786 - correct
Events : 68
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 32 2 active sync /dev/sdc
0 0 8 0 0 active sync /dev/sda
1 1 8 16 1 active sync /dev/sdb
2 2 8 32 2 active sync /dev/sdc
3 3 8 48 3 active sync /dev/sdd
root@adam-desktop:~# mdadm --examine --verbose /dev/sde
/dev/sde:
Magic : a92b4efc
Version : 00.90.03
UUID : b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
Creation Time : Sat Nov 1 22:14:18 2008
Raid Level : raid5
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Mon Sep 20 19:24:26 2010
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : e9cec798 - correct
Events : 68
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 3 8 48 3 active sync /dev/sdd
0 0 8 0 0 active sync /dev/sda
1 1 8 16 1 active sync /dev/sdb
2 2 8 32 2 active sync /dev/sdc
3 3 8 48 3 active sync /dev/sdd
root@adam-desktop:~# fdisk -l
Disk /dev/sda: 60.0 GB, 60011642880 bytes
255 heads, 63 sectors/track, 7296 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005cd42
Device Boot Start End Blocks Id System
/dev/sda1 * 1 6993 56165376 83 Linux
/dev/sda2 6993 7296 2437121 5 Extended
/dev/sda5 6993 7296 2437120 82 Linux swap / Solaris
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 121601 976760001 fd Linux raid
autodetect
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdc1 1 121601 976760001 fd Linux raid
autodetect
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdd1 1 121601 976760001 fd Linux raid
autodetect
Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sde1 1 121601 976760001 fd Linux raid
autodetect
Disk /dev/md0: 3000.6 GB, 3000614387712 bytes
255 heads, 63 sectors/track, 364803 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/md0p1 1 121601 976760001 fd Linux raid
autodetect
Partition 1 does not start on physical sector boundary.
root@adam-desktop:~# cat /proc/partitions
major minor #blocks name
8 0 58605120 sda
8 1 56165376 sda1
8 2 1 sda2
8 5 2437120 sda5
8 16 976762584 sdb
8 17 976760001 sdb1
8 32 976762584 sdc
8 33 976760001 sdc1
8 48 976762584 sdd
8 49 976760001 sdd1
8 64 976762584 sde
8 65 976760001 sde1
9 0 2930287488 md0
259 0 976760001 md0p1
root@adam-desktop:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active raid5 sde[3] sdb[0] sdc[1] sdd[2]
2930287488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
* Re: [linux-lvm] Unable to mount LVM partition - table too small
2010-09-23 17:43 ` Adam Newham
@ 2010-09-23 21:12 ` Luca Berra
2010-09-24 16:25 ` Adam Newham
0 siblings, 1 reply; 8+ messages in thread
From: Luca Berra @ 2010-09-23 21:12 UTC (permalink / raw)
To: Adam Newham; +Cc: LVM general discussion and development
>
> I also don't remember ever seeing /dev/md0p1 before, just a /dev/md0
I guessed as much.
> root@adam-desktop:~# mdadm --examine --scan --verbose
> ARRAY /dev/md0 level=raid5 num-devices=4
> UUID=b5e0fcd0:cfadbb04:a5b6f22e:457f47ae
> devices=/dev/sde,/dev/sdd,/dev/sdc,/dev/sdb
The above is the wrong md0, using the whole disks.
This one is the correct one:
> ARRAY /dev/md0 level=raid5 num-devices=4
> UUID=08558923:881d9efd:464c249d:988d2ec6
> devices=/dev/sde1,/dev/sdd1,/dev/sdc1,/dev/sdb1
>
> root@adam-desktop:~# mdadm --examine --verbose /dev/sdb
> root@adam-desktop:~# mdadm --examine --verbose /dev/sdc
> root@adam-desktop:~# mdadm --examine --verbose /dev/sdd
> root@adam-desktop:~# mdadm --examine --verbose /dev/sde
You did not dump the partitions, but I guess you couldn't, since the
device files disappeared.
> root@adam-desktop:~# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md0 : active raid5 sde[3] sdb[0] sdc[1] sdd[2]
> 2930287488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
>
> unused devices: <none>
>
Put
DEVICE /dev/sd?1
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=08558923:881d9efd:464c249d:988d2ec6
into /etc/mdadm.conf (or /etc/mdadm/mdadm.conf, wherever Ubuntu places
it) and reboot; you should be able to see your data.
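On Ubuntu you will likely also want to refresh the initramfs afterwards,
so the boot-time assembly picks up the new config (I am assuming their
initramfs embeds mdadm.conf):
update-initramfs -u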
After this, ask on the linux-raid ML for advice on how to zero the
duplicate superblock.
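(The usual tool would be something along the lines of
mdadm --zero-superblock /dev/sd[bcde]
run with the array stopped, but have that confirmed there before
touching anything.)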
L.
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
* Re: [linux-lvm] Unable to mount LVM partition - table too small
2010-09-23 21:12 ` Luca Berra
@ 2010-09-24 16:25 ` Adam Newham
0 siblings, 0 replies; 8+ messages in thread
From: Adam Newham @ 2010-09-24 16:25 UTC (permalink / raw)
To: linux-lvm
Many thanks for the assistance. With this I should be pointed in the
right direction. As this now looks like a RAID-related rather than an
LVM issue, I'll continue this on the RAID mailing list. For those
interested, the thread can be found here:
http://marc.info/?l=linux-raid&m=128528502831454&w=2
Once again, thanks
On 09/23/2010 02:12 PM, Luca Berra wrote:
> [Luca's reply quoted in full; snipped]