* Found duplicate PV: using /dev/sda3 not /dev/md1
@ 2014-06-14 21:58 Rainer Fügenstein
From: Rainer Fügenstein @ 2014-06-14 21:58 UTC (permalink / raw)
To: linux-raid
hi,
today I replaced two old 750GB disks in a RAID1 with two new 1TB
disks:
/dev/md0 (sda1, sdb1) (root filesystem)
/dev/md1 (sda3, sdb3) LVM with /var (and others)
- remove 750GB sdb, add new 1TB sdb, fdisk & sync sda -> sdb md0 and md1
- remove 750GB sda, add new 1TB sda, fdisk & sync sdb -> sda md0 (md1 pending)
here I had some problems with grub, system did not boot from new sdb,
so I switched SATA ports on both disks and worked on grub & bios to
finally boot from sdb (maybe this was where the sh!t hit the fan)
on next boot, LVM reported the following:
Found duplicate PV efP9J0elVh1bokdArXcwsCB0KW0Yy9Ya: using /dev/sda3 not /dev/md1
this doesn't look good, does it?
is there any way to convince LVM to use md1 instead of sda3,
preferably without shutting down/rebooting the server? (remember,
/var is an LVM volume) otherwise, I need to drive 70km, but that's OK
if necessary.
[root@gateway ~]# pvdisplay
Found duplicate PV efP9J0elVh1bokdArXcwsCB0KW0Yy9Ya: using /dev/sda3 not /dev/md1
--- Physical volume ---
PV Name /dev/sda3
VG Name vg_gateway
PV Size 688.64 GB / not usable 1.56 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 176291
Free PE 154787
Allocated PE 21504
PV UUID efP9J0-elVh-1bok-dArX-cwsC-B0KW-0Yy9Ya
[root@gateway ~]# mdadm --misc -D /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Thu May 26 02:42:30 2011
Raid Level : raid1
Array Size : 8385792 (8.00 GiB 8.59 GB)
Used Dev Size : 8385792 (8.00 GiB 8.59 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Jun 14 23:55:03 2014
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : e305eb48:ccdead21:b2113316:370d4eaa
Events : 0.138466
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
[root@gateway ~]# mdadm --misc -D /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Mon Jun 6 23:12:19 2011
Raid Level : raid1
Array Size : 722089536 (688.64 GiB 739.42 GB)
Used Dev Size : 722089536 (688.64 GiB 739.42 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Sat Jun 14 23:56:06 2014
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 96% complete
UUID : 7b278c55:0642253d:7f816378:91d6e631
Events : 0.300608
Number Major Minor RaidDevice State
2 8 3 0 spare rebuilding /dev/sda3
1 8 19 1 active sync /dev/sdb3
[root@gateway ~]# lvdisplay
Found duplicate PV efP9J0elVh1bokdArXcwsCB0KW0Yy9Ya: using /dev/sda3 not /dev/md1
--- Logical volume ---
LV Name /dev/vg_gateway/lv_home
VG Name vg_gateway
LV UUID yeBeOp-yho6-rIvS-sGhr-lpWL-T3ss-T1L9tk
LV Write Access read/write
LV Status available
# open 1
LV Size 12.00 GB
Current LE 3072
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Name /dev/vg_gateway/lv_var
VG Name vg_gateway
LV UUID j5P82N-irO3-AS1k-TQtk-KwSG-PZWS-j8jsx2
LV Write Access read/write
LV Status available
# open 1
LV Size 12.00 GB
Current LE 3072
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Name /dev/vg_gateway/lv_data
VG Name vg_gateway
LV UUID ihCove-bOaa-HfYn-KNcU-5P4a-OYzO-XrbzEU
LV Write Access read/write
LV Status available
# open 1
LV Size 60.00 GB
Current LE 15360
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
tnx in advance.
* Re: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Rainer Fügenstein @ 2014-06-15 18:35 UTC (permalink / raw)
To: linux-raid
status update:
the server is running under a slight load, some incoming mails and
logging (messages, maillog). no data corruption so far.
my assumption: since all writes go to /dev/sda3 via LVM, no writes go
to the raid, therefore it doesn't sync, therefore no data corruption.
cu
--- NOT sent from an iPhone
* Re: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Phil Turmel @ 2014-06-16 2:46 UTC (permalink / raw)
To: Rainer Fügenstein, linux-raid
Hi Rainer,
[please don't top-post, please *do* trim your replies]
On 06/15/2014 02:35 PM, Rainer Fügenstein wrote:
> today I replaced two old 750GB disks in a RAID1 with two new 1TB
> disks:
>
> /dev/md0 (sda1, sdb1) (root filesystem)
> /dev/md1 (sda3, sdb3) LVM with /var (and others)
> /dev/md1:
> Version : 0.90
> Creation Time : Mon Jun 6 23:12:19 2011
> Raid Level : raid1
Your problem is that you are using version 0.90 metadata. It and v1.0
put the superblock at the end of the member devices, and it cannot be
found if the device size changes. Plus, the data starts at sector zero,
so if the MD superblock isn't found, the device content is identified
without the raid layer.
These metadata formats should only be used with boot partitions that
need to be readable without assembling the array.
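The offset arithmetic behind this can be sketched in a few lines of shell (a sketch only: the 64 KiB reserved block follows the v0.90 layout, and the partition sizes are approximations taken from this thread):

```shell
# Sketch: why growing a partition "loses" a v0.90 superblock.
# The v0.90 superblock lives in the last 64 KiB-aligned 64 KiB block of
# the member device, so its position is a pure function of the device size.
CHUNK=65536   # 64 KiB reserved block at the (aligned) end

sb_offset() {   # byte offset where a v0.90 superblock is expected
    echo $(( $1 / CHUNK * CHUNK - CHUNK ))
}

OLD=$(( 722089536 * 1024 ))   # ~688.64 GiB: the old sda3/sdb3 member size
NEW=989184655360              # ~921.25 GiB: the regrown sda3 (approximate)

# The superblock was written at sb_offset(OLD); a scan of the grown
# partition looks at sb_offset(NEW) and finds nothing there, so the
# LVM signature at sector zero is seen instead.
echo "old: $(sb_offset $OLD)  new: $(sb_offset $NEW)"
```

v1.x metadata at the start of the device (v1.1/v1.2) avoids this entirely, since the superblock offset no longer depends on the device size.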
> status update:
>
> the server is running under a slight load, some incoming mails and
> logging (messages, maillog). no data corruption so far.
Just luck. . .
> my assumption: since all writes go to /dev/sda3 via LVM, no writes go
> to the raid, therefore it doesn't sync, therefore no data corruption.
Your PV is /dev/sda3, and it is also the rebuild target of your md1
array, so writes to it are coming from two directions. Your lack of
corruption at the moment is a lucky fluke.
You need to stop /dev/md1 as soon as possible to eliminate the chances
of further corruption.
If you must keep the server online, I recommend the following steps:
1) Create a new /dev/md1 array with v1.2 metadata, and just /dev/sdb3.
Use "missing" for the second copy. Zero the beginning of /dev/sdb3 to
make sure it is not misidentified as the old PV.
2) Create a new PV on the new /dev/md1 and add it to your existing
volume group.
3) Use "pvmove" to migrate the LVs in your volume group from /dev/sda3
to the new /dev/md1.
4) When the /dev/sda3 PV is empty, remove it from the volume group and
pvremove its signature.
5) Add the cleared /dev/sda3 to the new /dev/md1 and let it rebuild.
The v1.2 metadata will prevent this problem from happening again.
6) Update your mdadm.conf file, lvm.conf file (if necessary) and then
update your initramfs.
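The six steps above can be sketched as a shell sequence. This is a dry run that only echoes each command (drop the `run` wrapper to actually execute); it assumes the old degraded md1 has already been stopped, the device names are the ones from this thread, and the lvm.conf filter in step 6 is only one reasonable example of fencing LVM off the raw partitions. Verify everything against your own setup first:

```shell
# Dry-run sketch of the migration; echoes commands instead of running them.
run() { echo "would run: $*"; }

# 1) degraded v1.2 array on sdb3 alone; clear any stale signatures first
#    (data is safe: the live PV is sda3, sdb3 holds only a stale copy)
run wipefs -a /dev/sdb3    # or: dd if=/dev/zero of=/dev/sdb3 bs=1M count=4
run mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.2 \
    /dev/sdb3 missing

# 2) new PV on the array, added to the existing VG
run pvcreate /dev/md1
run vgextend vg_gateway /dev/md1

# 3) migrate all extents off the raw partition (online)
run pvmove /dev/sda3 /dev/md1

# 4) drop the now-empty PV and wipe its signature
run vgreduce vg_gateway /dev/sda3
run pvremove /dev/sda3

# 5) let the cleared partition rebuild into the new array
run mdadm --add /dev/md1 /dev/sda3

# 6) record the new array; optionally keep LVM off raw partitions, e.g.
#    in /etc/lvm/lvm.conf: filter = [ "a|^/dev/md|", "r|^/dev/sd.*[0-9]|" ]
run sh -c 'mdadm --detail --scan >> /etc/mdadm.conf'
```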
HTH,
Phil
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Phil Turmel @ 2014-06-16 14:35 UTC (permalink / raw)
To: Rainer Fügenstein; +Cc: linux-raid
Hi Rainer,
[CC to list added back. Use reply-to-all on kernel.org lists]
On 06/16/2014 09:53 AM, Rainer Fügenstein wrote:
>
> PT> Your problem is that you are using version 0.90 metadata. It and v1.0
> PT> put the superblock at the end of the member devices, and it cannot be
> PT> found if the device size changes. Plus, the data starts at sector zero,
> PT> so if the MD superblock isn't found, the device content is identified
> PT> without the raid layer.
>
> If I understand it correctly, the error happened (apart from the
> superblock version) because of the reboot(s) before a) md1 was grown
> and/or/maybe b) sync was finished.
Probably, but v0.9 arrays are vulnerable to whole-disk vs. partition
misidentification, and should never be used on a partition that
encompasses the end of a disk.
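The whole-disk vs. partition ambiguity follows from the same end-of-device placement: when a 64 KiB-aligned partition runs to the end of the disk, the offset computed for the whole disk lands on the very same bytes as the one computed for the partition. A sketch with illustrative (made-up) sizes:

```shell
# Sketch: a v0.90 superblock at the end of a disk-spanning partition is
# also found when scanning the raw disk, because both computations round
# down to the same 64 KiB block at the end of the device.
CHUNK=65536

sb_offset() { echo $(( $1 / CHUNK * CHUNK - CHUNK )); }

DISK=1000204886016          # nominal 1 TB disk (illustrative)
START=$(( 1024 * 1024 ))    # 1 MiB-aligned partition start
PART=$(( DISK - START ))    # partition runs to the end of the disk

# absolute position of the partition's superblock vs. where a scan of
# the whole disk would look -- they coincide:
if [ $(( START + $(sb_offset $PART) )) -eq "$(sb_offset $DISK)" ]; then
    echo "same superblock visible via both the disk and the partition"
fi
```

The coincidence holds whenever the partition start is a multiple of 64 KiB, which modern 1 MiB-aligned partitioning guarantees.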
> PT> These metadata formats should only be used with boot partitions that
> PT> need to be readable without assembling the array.
this was how the CentOS installer created the raid (with 0.90). will
keep it that way on md0 (which holds /boot and root).
>
> PT> You need to stop /dev/md1 as soon as possible to eliminate the chances
> PT> of further corruption.
>
> [root@gateway home]# mdadm --manage --stop /dev/md1
> mdadm: fail to stop array /dev/md1: Device or resource busy
> Perhaps a running process, mounted filesystem or active volume group?
>
> [root@gateway home]# lsof | grep md1
> md1_raid1 461 root cwd DIR 9,0 4096 2 /
> md1_raid1 461 root rtd DIR 9,0 4096 2 /
> md1_raid1 461 root txt unknown /proc/461/exe
>
> since nothing else than md1_raid1 is accessing /dev/md1, I assume it
> is safe to --force --stop?
What is running as process ID 461? That needs to be killed off, and any
mount involving /dev/md1 stopped.
> PT> If you must keep the server online, I recommend the following steps:
>
> big thanks, your strategy is much better than the one I had planned
> (and fills some knowledge gaps).
Your situation is the primary reason I recommend LVM.
Phil
* Re[2]: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Rainer Fügenstein @ 2014-06-16 18:20 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid
hi,
>> [root@gateway home]# lsof | grep md1
>> md1_raid1 461 root cwd DIR 9,0 4096 2 /
>> md1_raid1 461 root rtd DIR 9,0 4096 2 /
>> md1_raid1 461 root txt unknown /proc/461/exe
PT> What is running as process ID 461? That needs to be killed off, and any
PT> mount involving /dev/md1 stopped.
process ID 461 is [md1_raid1] and can't be killed via kill or kill -9:
root 461 2.2 0.0 0 0 ? S< Jun14 61:09 [md1_raid1]
there is no "normal" mount on /dev/md1, maybe LVM has its fingers on
it although it should not?
[root@gateway ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 7.8G 4.8G 2.7G 65% /
/dev/mapper/vg_gateway-lv_var
12G 1.1G 11G 9% /var
/dev/mapper/vg_gateway-lv_home
12G 4.1G 7.2G 37% /home
/dev/mapper/vg_gateway-lv_data
60G 42G 15G 74% /data
tmpfs 989M 0 989M 0% /dev/shm
[root@gateway ~]# umount /dev/md1
umount: /dev/md1: not mounted
tnx & cu
* Re: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Phil Turmel @ 2014-06-16 20:15 UTC (permalink / raw)
To: Rainer Fügenstein; +Cc: linux-raid
On 06/16/2014 02:20 PM, Rainer Fügenstein wrote:
> [root@gateway ~]# umount /dev/md1
> umount: /dev/md1: not mounted
Yep, use mdadm --stop --force.
If LVM is the culprit, you'll want to stop that right away.
Phil
* Re[2]: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Rainer Fügenstein @ 2014-06-16 20:28 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid
PT> Yep, use mdadm --stop --force.
unfortunately, same result:
[root@gateway ~]# mdadm --stop --force /dev/md1
mdadm: fail to stop array /dev/md1: Device or resource busy
Perhaps a running process, mounted filesystem or active volume group?
cu
* Re: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Phil Turmel @ 2014-06-16 20:54 UTC (permalink / raw)
To: Rainer Fügenstein; +Cc: linux-raid
On 06/16/2014 04:28 PM, Rainer Fügenstein wrote:
>
> PT> Yep, use mdadm --stop --force.
>
> unfortunately, same result:
>
> [root@gateway ~]# mdadm --stop --force /dev/md1
> mdadm: fail to stop array /dev/md1: Device or resource busy
> Perhaps a running process, mounted filesystem or active volume group?
Look in /sys/block/md1/holders/
* Re[2]: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Rainer Fügenstein @ 2014-06-17 1:27 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid
phil,
PT> Look in /sys/block/md1/holders/
[root@gateway md1]# cd holders/
[root@gateway holders]# ls -lh
total 0
lrwxrwxrwx 1 root root 0 Jun 16 23:11 dm-0 -> ../../../block/dm-0
lrwxrwxrwx 1 root root 0 Jun 16 23:11 dm-1 -> ../../../block/dm-1
lrwxrwxrwx 1 root root 0 Jun 16 23:11 dm-2 -> ../../../block/dm-2
[root@gateway holders]#
not sure what that means?
* Re: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Phil Turmel @ 2014-06-17 11:57 UTC (permalink / raw)
To: Rainer Fügenstein; +Cc: linux-raid
On 06/16/2014 09:27 PM, Rainer Fügenstein wrote:
> phil,
>
> PT> Look in /sys/block/md1/holders/
>
> [root@gateway md1]# cd holders/
> [root@gateway holders]# ls -lh
> total 0
> lrwxrwxrwx 1 root root 0 Jun 16 23:11 dm-0 -> ../../../block/dm-0
> lrwxrwxrwx 1 root root 0 Jun 16 23:11 dm-1 -> ../../../block/dm-1
> lrwxrwxrwx 1 root root 0 Jun 16 23:11 dm-2 -> ../../../block/dm-2
> [root@gateway holders]#
>
> not sure what that means?
It means that LVM is indeed using the scrambled array for those devices.
Use "dmsetup ls" to identify them.
That also means my previous advice is wrong--you can't just shut it off.
It would be helpful if you could run "lsdrv" [1] on your system and
report the results here.
Phil
[1] http://github.com/pturmel/lsdrv
* Re: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Phil Turmel @ 2014-06-17 12:00 UTC (permalink / raw)
To: Rainer Fügenstein; +Cc: linux-raid
Note to all:
On 06/17/2014 07:57 AM, Phil Turmel wrote:
> That also means my previous advice is wrong--you can't just shut it off.
> It would be helpful if you could run "lsdrv" [1] on your system and
> report the results here.
>
> Phil
>
> [1] http://github.com/pturmel/lsdrv
I'm going to be on the road all day today and will not be able to help
again until tomorrow. :-(
Phil
* Re[2]: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Rainer Fügenstein @ 2014-06-17 14:12 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid
hi,
PT> It means that LVM is indeed using the scrambled array for those devices.
PT> Use "dmsetup ls" to identify them.
[root@gateway ~]# dmsetup ls
vg_gateway-lv_data (253, 2)
vg_gateway-lv_var (253, 1)
vg_gateway-lv_home (253, 0)
[root@gateway ~]#
those LVs are in use and on /dev/sda3 which LVM claims to use instead
of /dev/md1
PT> That also means my previous advice is wrong--you can't just shut it off.
PT> It would be helpful if you could run "lsdrv" [1] on your system and
PT> report the results here.
[root@gateway lsdrv-master]# ./lsdrv
Creating device for node 11:0 ...
Creating device for node 253:2 ...
Creating device for node 253:1 ...
Creating device for node 253:0 ...
Creating device for node 9:1 ...
Creating device for node 9:0 ...
Creating device for node 8:16 ...
Creating device for node 8:19 ...
Creating device for node 8:18 ...
Creating device for node 8:17 ...
Creating device for node 8:0 ...
Creating device for node 8:3 ...
Creating device for node 8:2 ...
Creating device for node 8:1 ...
Creating device for node 1:15 ...
Creating device for node 1:14 ...
Creating device for node 1:13 ...
Creating device for node 1:12 ...
Creating device for node 1:11 ...
Creating device for node 1:10 ...
Creating device for node 1:9 ...
Creating device for node 1:8 ...
Creating device for node 1:7 ...
Creating device for node 1:6 ...
Creating device for node 1:5 ...
Creating device for node 1:4 ...
Creating device for node 1:3 ...
Creating device for node 1:2 ...
Creating device for node 1:1 ...
Creating device for node 1:0 ...
**Warning** The following utility(ies) failed to execute:
sginfo
Some information may be missing.
PCI [sata_nv] 00:08.0 IDE interface: NVIDIA Corporation MCP61 SATA Controller (rev a2)
├scsi 0:0:0:0 ATA WDC WD10EFRX-68P {WD-WCC4JF2PUS6H}
│└sda 931.51g [8:0] Empty/Unknown
│ ├sda1 8.39g [8:1] MD raid1 (0/2) (w/ sdb1) in_sync {48eb05e3-21ad-decc-1633-11b2aa4e0d37}
│ │└md0 8.00g [9:0] MD v0.90 raid1 (2) clean {e305eb48:ccdead21:b2113316:370d4eaa}
│ │ │ext3 {c9da1652-6fb4-4d70-91ed-e5f8aacf00da}
│ │ └Mounted as /dev/root @ /
│ ├sda2 1.87g [8:2] swap
│ └sda3 921.25g [8:3] PV linux_raid_member 84.00G used, 604.64G free {efP9J0-elVh-1bok-dArX-cwsC-B0KW-0Yy9Ya}
│  └VG vg_gateway 688.64G 604.64G free {3grrBQ-YewZ-561L-TjEZ-agCR-d4zc-euFzDt}
│   ├dm-2 60.00g [253:2] LV lv_data ext3 {7d168761-8de3-41ef-a037-a212bd4bec58}
│   │└Mounted as /dev/vg_gateway/lv_data @ /data
│   ├dm-0 12.00g [253:0] LV lv_home ext3 {19fa9433-1692-4a07-a2c9-2f6d68f01fe5}
│   │└Mounted as /dev/vg_gateway/lv_home @ /home
│   └dm-1 12.00g [253:1] LV lv_var ext3 {a3a2776c-abd5-4c5f-bbe7-08a984a266c5}
│    └Mounted as /dev/vg_gateway/lv_var @ /var
└scsi 1:0:0:0 ATA WDC WD10EFRX-68P {WD-WCC4J8A6V0FW}
 └sdb 931.51g [8:16] Empty/Unknown
  ├sdb1 8.39g [8:17] MD raid1 (1/2) (w/ sda1) in_sync {48eb05e3-21ad-decc-1633-11b2aa4e0d37}
  │└md0 8.00g [9:0] MD v0.90 raid1 (2) clean {e305eb48:ccdead21:b2113316:370d4eaa}
  │ ext3 {c9da1652-6fb4-4d70-91ed-e5f8aacf00da}
  ├sdb2 1.87g [8:18] swap
  └sdb3 921.25g [8:19] MD raid1 (1/2) (w/ sda3) in_sync {558c277b-3d25-4206-7863-817f31e6d691}
   └md1 688.64g [9:1] MD v0.90 raid1 (2) clean {7b278c55:0642253d:7f816378:91d6e631}
    │ PV LVM2_member (inactive)
    └dm-0 12.00g [253:0] LV lv_home ext3 {19fa9433-1692-4a07-a2c9-2f6d68f01fe5}
PCI [sata_nv] 00:08.1 IDE interface: NVIDIA Corporation MCP61 SATA Controller (rev a2)
└scsi 3:0:0:0 ATAPI iHAS124 B
 └sr0 1.00g [11:0] Empty/Unknown
Other Block Devices
├ram0 16.00m [1:0] Empty/Unknown
├ram1 16.00m [1:1] Empty/Unknown
├ram2 16.00m [1:2] Empty/Unknown
├ram3 16.00m [1:3] Empty/Unknown
├ram4 16.00m [1:4] Empty/Unknown
├ram5 16.00m [1:5] Empty/Unknown
├ram6 16.00m [1:6] Empty/Unknown
├ram7 16.00m [1:7] Empty/Unknown
├ram8 16.00m [1:8] Empty/Unknown
├ram9 16.00m [1:9] Empty/Unknown
├ram10 16.00m [1:10] Empty/Unknown
├ram11 16.00m [1:11] Empty/Unknown
├ram12 16.00m [1:12] Empty/Unknown
├ram13 16.00m [1:13] Empty/Unknown
├ram14 16.00m [1:14] Empty/Unknown
└ram15 16.00m [1:15] Empty/Unknown
the tree characters looked weird in my terminal (obviously some charset
problem), but I hope all the needed information is there anyway.
tnx & cu
* Re[3]: Found duplicate PV: using /dev/sda3 not /dev/md1
From: Rainer Fügenstein @ 2014-08-06 12:49 UTC (permalink / raw)
To: Rainer Fügenstein; +Cc: Phil Turmel, linux-raid
Hi,
I assume we hit a dead end here?
RF> [root@gateway ~]# dmsetup ls
RF> vg_gateway-lv_data (253, 2)
RF> vg_gateway-lv_var (253, 1)
RF> vg_gateway-lv_home (253, 0)
RF> [root@gateway ~]#
[...] (see my last mail regarding this topic)
I wonder what will happen if I run mdadm --grow on /dev/md1? presumably
it will overwrite this stray LVM superblock and the raid volume content
will be the same as it was a few weeks ago, when this problem first
occurred?
tnx