* [linux-lvm] Problems with vgimport after software raid initialisation failed.
@ 2003-09-30 15:00 SystemError
2003-10-02 4:23 ` Heinz J . Mauelshagen
From: SystemError @ 2003-09-30 15:00 UTC (permalink / raw)
To: linux-lvm
Hello out there,
after migrating my precious volume group "datavg" from unmirrored
disks to Linux software RAID devices I ran into serious problems.
(Although I fear the biggest problem here was my own incompetence...)
First I moved the data off the old unmirrored disks using pvmove.
No problems so far.
At a certain point I had emptied the two PVs "/dev/hdh" and "/dev/hdf".
So I ran vgreduce on them, then created a new RAID1
"/dev/md4" (containing both "hdf" and "hdh") and added it to my
volume group "datavg" using pvcreate (-> "/dev/md4") and vgextend.
No problems so far.
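Condensed into a sketch, the sequence looks like this (the mdadm invocation is my guess at the RAID-creation step -- Red Hat 9 also shipped raidtools, so mkraid would have worked as well; the `run` wrapper only echoes, so the whole sequence can be reviewed before executing it for real):

```shell
# Sketch of the migration steps above. mdadm is an assumption (raidtools
# would also do); 'run' merely echoes each command instead of running it.
run() { echo "$@"; }

run pvmove /dev/hdf                     # empty the old PVs
run pvmove /dev/hdh
run vgreduce datavg /dev/hdf /dev/hdh   # drop them from the VG
run mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/hdf /dev/hdh
run pvcreate /dev/md4                   # turn the new mirror into a PV
run vgextend datavg /dev/md4            # and add it back to the VG
```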
Everything looked soooo perfect and so I decided to reboot the system...
At this point things started to go wrong: during the boot sequence
"/dev/md4" was not automatically activated, and suddenly the PV
"/dev/hdf" showed up in "datavg" while "/dev/md4" was gone.
Unfortunately I panicked and ran a vgexport on "datavg", fixed the broken
initialisation of "/dev/md4", and rebooted again.
This was probably a baaad idea.
Shame upon me.
Now my pvscan looks like this:
"
[root@athens root]# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE PV "/dev/md2" of VG "sysvg" [16 GB / 10 GB free]
pvscan -- inactive PV "/dev/md3" is in EXPORTED VG "datavg" [132.25 GB / 0 free]
pvscan -- inactive PV "/dev/md4" is associated to unknown VG "datavg" (run vgscan)
pvscan -- WARNING: physical volume "/dev/hdh" belongs to a meta device
pvscan -- inactive PV "/dev/hdf" is in EXPORTED VG "datavg" [57.12 GB / 50.88 GB free]
pvscan -- total: 5 [262.97 GB] / in use: 5 [262.97 GB] / in no VG: 0 [0]
"
Or with the -u option:
"
[root@athens root]# pvscan -u
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE PV "/dev/md2" with UUID "g6Au3J-2C4H-Ifjo-iESu-4yp8-aRQv-ozChyW" of VG "sysvg" [16 GB / 10 GB free]
pvscan -- inactive PV "/dev/md3" with UUID "R15mli-TFs2-214J-YTBh-Hatl-erbL-G7WS4b" is in EXPORTED VG "datavg" [132.25 GB / 0 free]
pvscan -- inactive PV "/dev/md4" with UUID "szAa6A-rNM7-FmeU-6DHl-rKmZ-SePL-IURwtg" is in EXPORTED VG "datavg" [57.12 GB / 50.88 GB free]
pvscan -- WARNING: physical volume "/dev/hdh" belongs to a meta device
pvscan -- inactive PV "/dev/hdf" with UUID "szAa6A-rNM7-FmeU-6DHl-rKmZ-SePL-IURwtg" is in EXPORTED VG "datavg" [57.12 GB / 50.88 GB free]
pvscan -- total: 5 [262.97 GB] / in use: 5 [262.97 GB] / in no VG: 0 [0]
"
A vgimport using "md3" (no problems with this RAID1) and "md4" fails:
"
[root@athens root]# vgimport datavg /dev/md3 /dev/md4
vgimport -- ERROR "pv_read(): multiple device" reading physical volume "/dev/md4"
"
Using "md3" and "hdh" also fails:
"
[root@athens root]# vgimport datavg /dev/md3 /dev/hdh
vgimport -- ERROR "pv_read(): multiple device" reading physical volume "/dev/hdh"
"
It also fails when I try to use "hdf", only the error message is
different:
"
[root@athens root]# vgimport datavg /dev/md3 /dev/hdf
vgimport -- ERROR: wrong number of physical volumes to import volume group "datavg"
"
So here I am, with a huge VG and tons of data in it but no way to access
the VG. Has anybody out there an idea how I can still access
the data of datavg?
By the way:
I am using Red Hat Linux 9.0 with the lvm-1.0.3-12 binary rpm package
as provided by Red Hat.
Bye
In desperation
Lutz Reinegger
PS:
Any comments and suggestions are highly appreciated, even if those
suggestions include the use of hex editors or sacrificing
caffeine to dark and ancient deities.
;-)
* Re: [linux-lvm] Problems with vgimport after software raid initialisation failed.
2003-09-30 15:00 [linux-lvm] Problems with vgimport after software raid initialisation failed SystemError
@ 2003-10-02 4:23 ` Heinz J . Mauelshagen
2003-10-02 5:58 ` SystemError
From: Heinz J . Mauelshagen @ 2003-10-02 4:23 UTC (permalink / raw)
To: linux-lvm
Lutz,
looks like you hit some strange LVM on top of MD bug :(
In order to get your VG active again (which of course is your highest
priority) and before we analyze the vgscan problem, you want to go for
the following workaround (presumably /etc/lvmconf/datavg.conf is the last
correct archive of the metadata):
# cp /etc/lvmconf/datavg.conf /etc/lvmtab.d/datavg
# echo -ne "datavg\0" >> /etc/lvmtab
# vgchange -ay datavg
Warning: the next vgscan run will remove the above metadata again, so avoid
running it for now by commenting it out in your boot script.
So much for the firefighting ;)
For further analysis, please do the following and send the resulting
bzip2'ed tar archive containing your metadata to me in private mail
<mge@sistina.com>:
# for d in md2 md3 md4 hdf hdh
# do
# dd bs=1k count=4k if=/dev/$d of=$d.vgda
# done
# tar cf Lutz_Reinegger.vgda.tar *.vgda
# rm *.vgda
# bzip2 Lutz_Reinegger.vgda.tar
Regards,
Heinz -- The LVM Guy --
*** Software bugs are stupid.
Nevertheless it needs not so stupid people to solve them ***
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Heinz Mauelshagen Sistina Software Inc.
Senior Consultant/Developer Am Sonnenhang 11
56242 Marienrachdorf
Germany
Mauelshagen@Sistina.com +49 2626 141200
FAX 924446
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
* Re: [linux-lvm] Problems with vgimport after software raid initialisation failed.
2003-10-02 4:23 ` Heinz J . Mauelshagen
@ 2003-10-02 5:58 ` SystemError
2003-10-02 9:03 ` Heinz J . Mauelshagen
From: SystemError @ 2003-10-02 5:58 UTC (permalink / raw)
To: linux-lvm
Heinz,
I did as instructed and now I am able to access my
precious "datavg" again.
Thanks a lot Heinz.
You really saved my day. :-)
I also edited /etc/rc.d/rc.sysinit and commented out all vgscans
(Red Hat does two of them), and added a hard-coded "vgchange -ay sysvg" in
order to bring my other VG with the /usr, /var, etc.
filesystems online during boot time.
Immediately after another reboot I tried to activate "datavg" using
"vgchange -ay datavg"...
...which failed.
Vgchange complained:
"
The PVs /dev/md3 and /dev/md4 are not active.
Please run vgscan.
"
(Can't remember the exact error message)
So I gulped, shrugged and decided that it was time for another
cup of coffee...
When I returned to my desk I ran the vgchange again in order to fetch
the error message for this email...
...et voilà:
"
[root@athens root]# vgchange -ay datavg
vgchange -- volume group "datavg" successfully activated
"
So I was only able to activate datavg some minutes after my system was
back up and running?
I really don't understand this...
OK, I am more than just happy to have my data back online, but WHY?
Anyone any ideas on this one?
Another question:
Is it safe to extend LVs and filesystems right now in
this strange situation?
Bye
Lutz Reinegger
PS: The requested tar archive is on its way.
PPS: Again: THANK YOU !!!
* Re: [linux-lvm] Problems with vgimport after software raid initialisation failed.
2003-10-02 5:58 ` SystemError
@ 2003-10-02 9:03 ` Heinz J . Mauelshagen
2003-10-09 15:26 ` SystemError
From: Heinz J . Mauelshagen @ 2003-10-02 9:03 UTC (permalink / raw)
To: linux-lvm
On Thu, Oct 02, 2003 at 12:57:18PM +0200, SystemError wrote:
> Heinz,
>
> I did as instructed and now I am able to access my
> precious "datavg" again.
> Thanks a lot Heinz.
> You really saved my day. :-)
Great. Sounds like filesystem backup time now ;)
>
> I also edited /etc/rc.d/rc.sysinit and commented out all vgscans
> (RedHat does 2 of them), and added a hard coded "vgchange -ay sysvg" in
> order to bring my other VG with the /usr, /var, etc...
> filesystems online during boot time.
>
> Immediately after another reboot I tried to activate "datavg" using
> "vgchange -ay datavg"...
> ...which failed.
> Vgchange complained:
> "
> The PVs /dev/md3 and /dev/md4 are not active.
> Please run vgscan.
> "
> (Can't remember the exact error message)
>
> So I gulped, shrugged and decided that it was time for another
> cup of coffee...
> When I returned to my desk I ran the vgchange again in order to fetch
> the error message for this email...
> ...et voila:
> "
> [root@athens root]# vgchange -ay datavg
> vgchange -- volume group "datavg" successfully activated
> "
>
> So I was only able to activate datavg some minutes after my system was
> back up and running ?
> I really don't understand this...
> OK, I am more than just happy to have my data back online, but WHY ?
> Anyone any ideas on this one ?
Sounds like a race to me, where the md devices aren't active yet when vgscan runs.
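One way to sidestep such a race is to wait until the arrays report as active before touching the VG. A minimal sketch, assuming /proc/mdstat lists each running array as "mdN : active ..." (the mdstat path is a parameter so the parsing can be exercised against a sample file):

```shell
# Poll /proc/mdstat until the named arrays are active, then activate the
# VG. The "mdN : active" line format is an assumption about mdstat.
md_active() {
    # usage: md_active <mdstat-file> <array>...
    local mdstat=$1 dev
    shift
    for dev in "$@"; do
        grep -q "^${dev} : active" "$mdstat" || return 1
    done
}

wait_for_md() {
    # usage: wait_for_md <array>...  (retries for up to 30 seconds)
    local i
    for i in $(seq 1 30); do
        md_active /proc/mdstat "$@" && return 0
        sleep 1
    done
    return 1
}

# In rc.sysinit, instead of a bare vgchange, one might then run:
#   wait_for_md md3 md4 && vgchange -ay datavg
```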
>
> Another question:
> Is it safe to extend LVs and filesystems right now in
> this strange situation ?
No, the activation/metadata problems need to get sorted out first.
--
Regards,
Heinz -- The LVM Guy --
* Re: [linux-lvm] Problems with vgimport after software raid initialisation failed.
2003-10-02 9:03 ` Heinz J . Mauelshagen
@ 2003-10-09 15:26 ` SystemError
From: SystemError @ 2003-10-09 15:26 UTC (permalink / raw)
To: linux-lvm
Hey there,
thanks to the help of Heinz I was able to get my VG online again...
Unfortunately we had a power outage here in Karlsruhe on 02.10.2003 and
so it took a little bit longer than expected.
(Of course the outage came right when it hurt the most.)
Now I am able to varyon/varyoff my VG as often as I like.
(During the chaos after the outage I renamed "datavg" -> "vaultvg".)
My "pvscan" looks now like this:
"
[root@athens root]# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE PV "/dev/md2" of VG "sysvg" [16 GB / 10 GB free]
pvscan -- ACTIVE PV "/dev/md3" of VG "vaultvg" [132.25 GB / 0 free]
pvscan -- ACTIVE PV "/dev/md4" of VG "vaultvg" [57.12 GB / 50.88 GB free]
pvscan -- WARNING: physical volume "/dev/hdh" belongs to a meta device
pvscan -- WARNING: physical volume "/dev/hdf" belongs to a meta device
pvscan -- total: 5 [205.70 GB] / in use: 5 [205.70 GB] / in no VG: 0 [0]
"
And here goes my "vgscan":
"
[root@athens root]# vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found active volume group "sysvg"
vgscan -- found active volume group "vaultvg"
vgscan -- ERROR "vg_read_with_pv_and_lv(): current PV" can't get data of volume group "datavg" from physical volume(s)
vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
vgscan -- WARNING: This program does not do a VGDA backup of your volume groups
"
Here are my remaining questions:
a.)
Why is "pvscan" complaining about the RAID members "hdh" and "hdf"?
It does not complain about my other RAID devices.
Any ideas?
b.)
And why is "vgscan" complaining about the VG "datavg",
even though this VG does not exist anymore?
---> Would perhaps a vgexport/vgimport be a solution?
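For question b.), one rough way to see whether a stale on-disk VGDA still carries the old name is to scan the start of each suspect device for it. This is only a sketch: the 128k window is an assumption about where LVM1 keeps its metadata, and it changes nothing on disk:

```shell
# Hypothetical check for leftover VG-name strings in the metadata area
# at the start of a device. Read-only; detects the name, nothing more.
find_vg_name() {
    # usage: find_vg_name <device-or-image> <vgname>
    dd if="$1" bs=1k count=128 2>/dev/null | strings | grep -q "$2"
}

for d in /dev/hdf /dev/hdh; do
    if find_vg_name "$d" datavg; then
        echo "$d still carries 'datavg' metadata"
    fi
done
```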
>>> Great. Sounds like filesystem backup time now ;)
I would like very much to do so, Heinz.
:-)
But unfortunately that is not an option in
this setup...
:-(
...better don't ask...
;-)
Still grateful for any hints
Lutz Reinegger
Am Don, 2003-10-02 um 15.51 schrieb Heinz J . Mauelshagen:
> On Thu, Oct 02, 2003 at 12:57:18PM +0200, SystemError wrote:
> > Heinz,
> >
> > I did as instructed and now I am able to access my
> > precious "datavg" again.
> > Thanks a lot Heinz.
> > You really saved my day. :-)
>
> Great. Sounds like filesystem backup time now ;)
>
> >
> > I also edited /etc/rc.d/rc.sysinit and commented out all vgscans
> > (RedHat does 2 of them), and added a hard coded "vgchange -ay sysvg" in
> > order to bring my other VG with the /usr, /var, etc...
> > filesystems online during boot time.
> >
> > Immediately after another reboot I tried to activate "datavg" using
> > "vgchange -ay datavg"...
> > ...which failed.
> > Vgchange complained:
> > "
> > The PVs /dev/md3 and /dev/md4 are not active.
> > Please run vgscan.
> > "
> > (Can't remember the excact error message)
> >
> > So I gulped,shrugged and decided that it was time for another
> > cup of coffee...
> > When I returned to my desk I ran the vgchange again in order to fetch
> > the error message for this email...
> > ...et voila:
> > "
> > [root@athens root]# vgchange -ay datavg
> > vgchange -- volume group "datavg" successfully activated
> > "
> >
> > So I was only able to activate datavg some minutes after my system was
> > back up and running ?
> > I really don't understand this...
> > OK, I am more than just happy to have my data back online, but WHY ?
> > Anyone any ideas on this one ?
>
> Sounds like a race to me, where the mds aren't active when vgscan runs.
>
> >
> > Another question:
> > Is it save to extend lvs and filesystems right now in
> > this strange situation ?
>
> No, the activation/metadata problems need to get sorted out first.
>
> >
> > Bye
> > Lutz Reinegger
> >
> > PS: The requested tar archvie is on its way.
> >
> > PPS: Again: THANK YOU !!!
> >
> > Am Don, 2003-10-02 um 11.11 schrieb Heinz J . Mauelshagen:
> > >
> > > Lutz,
> > >
> > > looks like you hit some strange LVM on top of MD bug :(
> > >
> > > In order to get your VG active again (which of course is your highest
> > > priority) and before we analyze the vgscan problem, you want to go for
> > > the follwoing workaround (presumably /etc/lvmconf/datavg.conf is the last
> > > correct archive of the metadata):
> > >
> > > # cp /etc/lvmconf/datavg.conf /etc/lvmtab.d/datavg
> > > # echo -ne "datavg\0" >> /etc/lvmtab
> > > # vgchange -ay datavg
> > >
> > > Warning: the next vgscan run will remove the above metadata again, so avoid
> > > running it for now by commenting it out in your boot script.
> > >
> > > So far about firefighting ;)
> > >
> > >
> > > For further analysis, please do the following and send the resulting
> > > bzip2'ed tar archive containing your metadat to me in private mail
> > > <mge@sistina.com>:
> > >
> > > # for d in md2 md3 md4 hdf hdh
> > > # do
> > > # dd bs=1k count=4k if=/dev/$d of=$d.vgda
> > > # done
> > > # tar cf Lutz_Reinegger.vgda.tar *.vgda
> > > # rm *.vgda
> > > # bzip2 Lutz_Reinegger.vgda.tar
> > >
> > >
> > > Regards,
> > > Heinz -- The LVM Guy --
> > >
> > >
> > >
> > > On Tue, Sep 30, 2003 at 09:59:19PM +0200, SystemError wrote:
> > > > Hello out there,
> > > >
> > > > after I migrating my precious volume group "datavg" from unmirrored
> > > > disks to linux software raid devices I ran into serios problems.
> > > > (Although I fear the biggest problem here was my own incompetence...)
> > > >
> > > > First I moved the data from the old unmirrored disks away, using pvmove.
> > > > No Problems so far.
> > > >
> > > > At a certain point I had emptied the two PVs "/dev/hdh" and "/dev/hdf".
> > > > So I ran vgreduce on them, then created a new raid1 device
> > > > "/dev/md4" (containing both "hdf" and "hdh") and added it to my
> > > > volume group "datavg" using pvcreate (-> "/dev/md4") and vgextend.
> > > > No problems so far.
> > > >
> > > > Everything looked soooo perfect and so I decided to reboot the system...
> > > >
> > > > At this point things started to go wrong: during the boot sequence
> > > > "/dev/md4" was not automatically activated, and suddenly the PV
> > > > "/dev/hdf" showed up in "datavg" while "/dev/md4" was gone.
> > > >
> > > > Unfortunately I panicked and ran a vgexport on "datavg", fixed the broken
> > > > initialisation of "/dev/md4", and rebooted again.
> > > > This was probably a baaad idea.
> > > > Shame upon me.
> > > >
> > > > Now my pvscan looks like this:
> > > > "
> > > > [root@athens root]# pvscan
> > > > pvscan -- reading all physical volumes (this may take a while...)
> > > > pvscan -- ACTIVE PV "/dev/md2" of VG "sysvg" [16 GB / 10 GB free]
> > > > pvscan -- inactive PV "/dev/md3" is in EXPORTED VG "datavg" [132.25 GB / 0 free]
> > > > pvscan -- inactive PV "/dev/md4" is associated to unknown VG "datavg" (run vgscan)
> > > > pvscan -- WARNING: physical volume "/dev/hdh" belongs to a meta device
> > > > pvscan -- inactive PV "/dev/hdf" is in EXPORTED VG "datavg" [57.12 GB / 50.88 GB free]
> > > > pvscan -- total: 5 [262.97 GB] / in use: 5 [262.97 GB] / in no VG: 0 [0]
> > > > "
> > > >
> > > > Or with the -u option:
> > > > "
> > > > [root@athens root]# pvscan -u
> > > > pvscan -- reading all physical volumes (this may take a while...)
> > > > pvscan -- ACTIVE PV "/dev/md2" with UUID "g6Au3J-2C4H-Ifjo-iESu-4yp8-aRQv-ozChyW" of VG "sysvg" [16 GB / 10 GB free]
> > > > pvscan -- inactive PV "/dev/md3" with UUID "R15mli-TFs2-214J-YTBh-Hatl-erbL-G7WS4b" is in EXPORTED VG "datavg" [132.25 GB / 0 free]
> > > > pvscan -- inactive PV "/dev/md4" with UUID "szAa6A-rNM7-FmeU-6DHl-rKmZ-SePL-IURwtg" is in EXPORTED VG "datavg" [57.12 GB / 50.88 GB free]
> > > > pvscan -- WARNING: physical volume "/dev/hdh" belongs to a meta device
> > > > pvscan -- inactive PV "/dev/hdf" with UUID "szAa6A-rNM7-FmeU-6DHl-rKmZ-SePL-IURwtg" is in EXPORTED VG "datavg" [57.12 GB / 50.88 GB free]
> > > > pvscan -- total: 5 [262.97 GB] / in use: 5 [262.97 GB] / in no VG: 0 [0]
> > > >
> > > > "
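The `-u` listing above exposes the underlying conflict: "/dev/md4" and its raid1 component "/dev/hdf" report the identical PV UUID. A small sketch (not an LVM tool, just a grep pipeline; the UUID pattern and sample lines are taken from the listing above) of filtering such duplicates out of saved pvscan output:

```shell
#!/bin/sh
# Sketch: find PV UUIDs that appear on more than one device in saved
# `pvscan -u` output. In real use you would pipe `pvscan -u 2>&1` in
# instead of the fabricated sample below.
find_dup_uuids() {
    grep -o '[A-Za-z0-9]\{6\}\(-[A-Za-z0-9]\{4\}\)\{5\}-[A-Za-z0-9]\{6\}' |
        sort | uniq -d
}

cat <<'EOF' | find_dup_uuids
pvscan -- ACTIVE PV "/dev/md2" with UUID "g6Au3J-2C4H-Ifjo-iESu-4yp8-aRQv-ozChyW"
pvscan -- inactive PV "/dev/md4" with UUID "szAa6A-rNM7-FmeU-6DHl-rKmZ-SePL-IURwtg"
pvscan -- inactive PV "/dev/hdf" with UUID "szAa6A-rNM7-FmeU-6DHl-rKmZ-SePL-IURwtg"
EOF
# prints: szAa6A-rNM7-FmeU-6DHl-rKmZ-SePL-IURwtg
```

Any UUID printed here is claimed by two devices at once, which is exactly the md4/hdf situation shown above.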
> > > >
> > > > A vgimport using "md3" (no problems with this raid1) and "md4" fails:
> > > > "
> > > > [root@athens root]# vgimport datavg /dev/md3 /dev/md4
> > > > vgimport -- ERROR "pv_read(): multiple device" reading physical volume "/dev/md4"
> > > > "
> > > >
> > > > Using "md3" and "hdh" also fails:
> > > > "
> > > > [root@athens root]# vgimport datavg /dev/md3 /dev/hdh
> > > > vgimport -- ERROR "pv_read(): multiple device" reading physical volume "/dev/hdh"
> > > > "
> > > >
> > > > It also fails when I try to use "hdf", though the error message is
> > > > different:
> > > > "
> > > > [root@athens root]# vgimport datavg /dev/md3 /dev/hdf
> > > > vgimport -- ERROR: wrong number of physical volumes to import volume group "datavg"
> > > > "
> > > >
> > > > So here I am, with a huge VG and tons of data in it but no way to
> > > > access the VG. Does anybody out there have an idea how I can still
> > > > access the data in datavg?
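Both failure modes above (the "multiple device" errors and pvscan's earlier "belongs to a meta device" warning) stem from the same PV signature being visible both through /dev/md4 and through its component disks. A hypothetical helper (not part of LVM) sketching how to check whether a disk is claimed by an MD array; the mdstat path is a parameter so the demo runs against a fabricated sample rather than a live /proc/mdstat:

```shell
#!/bin/sh
# Hypothetical helper: is a disk listed as a component of an MD array?
# In /proc/mdstat components appear as e.g. "hdf[0]", so we look for the
# disk name followed by '['. On a live system pass /proc/mdstat; here we
# use a sample modelled on the md4 = hdf + hdh setup described above.
is_md_component() {
    disk=$1
    mdstat=$2
    grep -q "${disk}\[" "$mdstat"
}

cat > mdstat.sample <<'EOF'
md4 : active raid1 hdh[1] hdf[0]
      60049536 blocks [2/2] [UU]
EOF

if is_md_component hdf mdstat.sample; then
    echo "hdf is claimed by an MD device"   # this branch is taken
fi
rm -f mdstat.sample
```

A disk that passes this check should only ever be accessed through its MD device, which is why handing "hdf" or "hdh" directly to vgimport cannot work here.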
> > > >
> > > > By the way:
> > > > I am using Red Hat Linux 9.0 with the lvm-1.0.3-12 binary rpm package
> > > > as provided by Red Hat.
> > > >
> > > > Bye
> > > > In desperation
> > > > Lutz Reinegger
> > > >
> > > > PS:
> > > > Any comments and suggestions are highly appreciated, even if those
> > > > suggestions include the use of hex editors or sacrificing
> > > > caffeine to dark and ancient deities.
> > > > ;-)
> > > >
> > > >
> > > >
> > > > _______________________________________________
> > > > linux-lvm mailing list
> > > > linux-lvm@sistina.com
> > > > http://lists.sistina.com/mailman/listinfo/linux-lvm
> > > > read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> > >
> > > *** Software bugs are stupid.
> > > Nevertheless it needs not so stupid people to solve them ***
> > >
> > > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> > >
> > > Heinz Mauelshagen Sistina Software Inc.
> > > Senior Consultant/Developer Am Sonnenhang 11
> > > 56242 Marienrachdorf
> > > Germany
> > > Mauelshagen@Sistina.com +49 2626 141200
> > > FAX 924446
> > > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> > >
> > >
> >
> >
> >
>
> --
>
> Regards,
> Heinz -- The LVM Guy --
>
>
>
2003-09-30 15:00 [linux-lvm] Problems with vgimport after software raid initialisation failed SystemError
2003-10-02 4:23 ` Heinz J . Mauelshagen
2003-10-02 5:58 ` SystemError
2003-10-02 9:03 ` Heinz J . Mauelshagen
2003-10-09 15:26 ` SystemError