linux-lvm.redhat.com archive mirror
* [linux-lvm] recover volume group & locical volumes from PV?
@ 2019-05-11 22:52 "Rainer Fügenstein"
  2019-05-12  0:09 ` Roger Heflin
  2019-05-13  8:37 ` Zdenek Kabelac
  0 siblings, 2 replies; 5+ messages in thread
From: "Rainer Fügenstein" @ 2019-05-11 22:52 UTC (permalink / raw)
  To: linux-lvm

hi,

I am (was) using Fedora 28 installed in several LVs on /dev/sda5 (= PV),
where sda is a "big" SSD.

by accident, I attached (via a SATA hot-swap bay) an old hard disk
(/dev/sdc1) that had been used temporarily about 2 months ago to move the
volume group / logical volumes from the "old" SSD to the "new" SSD
(pvcreate, vgextend, pvmove, ...)

this combination of old PV and new PV messed up the filesystems. when I
noticed the mistake, I did a shutdown and physically removed /dev/sdc.
this also removed the VG and LVs on /dev/sda5, causing the system to crash
on boot.

the layout was something like this:

/dev/sda3 ==> /boot
/dev/fedora_sh64/lv_home
/dev/fedora_sh64/lv_root
/dev/fedora_sh64/lv_var
...

[root@localhost-live ~]# pvs
  PV         VG Fmt  Attr PSize   PFree
  /dev/sda5     lvm2 ---  <47.30g <47.30g

[root@localhost-live ~]# pvdisplay
  "/dev/sda5" is a new physical volume of "<47.30 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sda5
  VG Name
  PV Size               <47.30 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               EOi5Ln-W26D-SER2-tLke-xNMP-Prgq-aUfLPz

(please note "NEW Physical Volume" ?!?!)

[root@localhost-live ~]# vgdisplay
[root@localhost-live ~]# lvdisplay

[root@localhost-live ~]# pvscan
  PV /dev/sda5                      lvm2 [<47.30 GiB]
  Total: 1 [<47.30 GiB] / in use: 0 [0   ] / in no VG: 1 [<47.30 GiB]

[root@localhost-live ~]# pvck /dev/sda5
  Found label on /dev/sda5, sector 1, type=LVM2 001
  Found text metadata area: offset=4096, size=1044480

after re-adding the old /dev/sdc1 disk, the VG and LVs show up; the
filesystems are a bit damaged but readable. content is about two months old.

[root@localhost-live ~]# pvs
  PV         VG              Fmt  Attr PSize    PFree
  /dev/sda5                  lvm2 ---   <47.30g <47.30g
  /dev/sdc1  fedora_sh64     lvm2 a--  <298.09g 273.30g

is there any chance to get VG and LVs back?

thanks in advance.

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [linux-lvm] recover volume group & locical volumes from PV?
  2019-05-11 22:52 [linux-lvm] recover volume group & locical volumes from PV? "Rainer Fügenstein"
@ 2019-05-12  0:09 ` Roger Heflin
  2019-05-13  8:37 ` Zdenek Kabelac
  1 sibling, 0 replies; 5+ messages in thread
From: Roger Heflin @ 2019-05-12  0:09 UTC (permalink / raw)
  To: LVM general discussion and development

Check whether there are LVM metadata copies in
/etc/lvm/{archive,backup,cache}. If you find one that looks right, you
can use vgcfgrestore to put it back onto the correct PV. Each copy
documents the last command run before it was made; for it to be the
right copy, make sure no critical commands were run after it. The file
will have the full layout of the VG/PVs/LVs in it, and contains all the
information that made up the PV/VG/LV.
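[A minimal sketch of that check. The `show_archives` helper, the `ARCHIVE_DIR` override, and the archive file name in the comment are illustrative, not part of lvm2; the `vgcfgrestore` calls are left as comments because they rewrite on-disk metadata.]

```shell
# Sketch: list LVM metadata archives newest-first and show which command
# each copy was taken *before* (the "description" field in the file).
ARCHIVE_DIR=${ARCHIVE_DIR:-/etc/lvm/archive}

show_archives() {
    # Guard: on a rescue/live image the directory may not exist.
    [ -d "$ARCHIVE_DIR" ] || { echo "no archive dir: $ARCHIVE_DIR"; return 0; }
    for f in $(ls -t "$ARCHIVE_DIR"/*.vg 2>/dev/null); do
        printf '%s: ' "$f"
        grep -m1 '^description' "$f"
    done
    return 0
}

show_archives

# Once the right file is identified (one taken *before* the bad vgreduce),
# restore it -- dry run first, then for real (needs the VG's PVs present):
#   vgcfgrestore --test -f /etc/lvm/archive/vg_sh64_00012-1234567890.vg vg_sh64
#   vgcfgrestore -f /etc/lvm/archive/vg_sh64_00012-1234567890.vg vg_sh64
```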


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [linux-lvm] recover volume group & locical volumes from PV?
  2019-05-11 22:52 [linux-lvm] recover volume group & locical volumes from PV? "Rainer Fügenstein"
  2019-05-12  0:09 ` Roger Heflin
@ 2019-05-13  8:37 ` Zdenek Kabelac
  2019-05-13 15:17   ` Rainer Fügenstein
  2019-05-15 16:55   ` Rainer Fügenstein
  1 sibling, 2 replies; 5+ messages in thread
From: Zdenek Kabelac @ 2019-05-13  8:37 UTC (permalink / raw)
  To: LVM general discussion and development, Rainer Fügenstein

On 12. 05. 19 at 0:52, "Rainer Fügenstein" wrote:
> hi,
> 
> I am (was) using Fedora 28 installed in several LVs on /dev/sda5 (= PV),
> where sda is a "big" SSD.
> 
> by accident, I attached (via SATA hot swap bay) an old harddisk
> (/dev/sdc1), which was used about 2 months temporarily to move the volume
> group / logical volumes from the "old" SSD to the "new" SSD (pvadd,
> pvmove, ...)
> 

Hi

I don't understand how this could have happened by accident.
lvm2 provides strong detection of duplicated devices.
It also detects older metadata.

So you would have had to put in an exact but old copy of your device
and at the same time drop out the original one - is that what you did?

> this combination of old PV and new PV messed up the filesystems. when I
> noticed the mistake, I did a shutdown and physically removed /dev/sdc.
> this also removed VG and LVs on /dev/sda5, causing the system crash on
> boot.
> 
> 
> [root@localhost-live ~]# pvs
>    PV         VG              Fmt  Attr PSize    PFree
>    /dev/sda5                  lvm2 ---   <47.30g <47.30g
>    /dev/sdc1  fedora_sh64     lvm2 a--  <298.09g 273.30g
> 
> is there any chance to get VG and LVs back?


VG & LV are just 'terms' - there is no 'physical content' behind them - so if
you've already used your filesystem and modified its bits on a device, the
physical content of your storage is simply overwritten and there is no way to
recover it by just fixing lvm2 metadata.

lvm2 provides the command 'vgcfgrestore', which can restore your older
metadata content (a description of which devices are used and where the
individual LVs place their extents - basically a mapping of blocks) -
typically from your /etc/lvm/archive directory. In the worst case you can
obtain older metadata by scanning the first MiB of your physical drive - the
metadata is stored there in ASCII format in a ring buffer, so for your small
set of LVs you likely have the full history there.
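[A sketch of that worst-case route. The function name `dump_lvm_meta` and the `PV` override are illustrative; the device path is the one from this thread. The `dd` only reads, nothing is written to the device.]

```shell
# Sketch: recover older LVM2 metadata from the ASCII ring buffer that lives
# in the first MiB of the PV. Read-only; point PV at an image file to try it.
PV=${PV:-/dev/sda5}

dump_lvm_meta() {
    dev=$1
    # Guard so the sketch degrades gracefully when the device is absent.
    [ -r "$dev" ] || { echo "cannot read $dev" >&2; return 0; }
    # pvck reported the text metadata area at offset 4096, inside the 1st MiB;
    # older generations stay in the ring buffer until overwritten.
    dd if="$dev" bs=1M count=1 2>/dev/null | strings -n 8
}

# Each metadata generation starts with the VG name and '{' and carries a
# 'seqno'; pick the newest seqno written before the accident.
dump_lvm_meta "$PV" | grep -E '^[A-Za-z_][A-Za-z0-9_]* \{|seqno' || true
```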

Once you put back your original 'drive set' and restore your lvm2
metadata to the point before you started to play with the bad drive, your
only hope is a properly working 'fsck' - there is nothing lvm2 can do to
help with that.

Regards

Zdenek

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [linux-lvm] recover volume group & locical volumes from PV?
  2019-05-13  8:37 ` Zdenek Kabelac
@ 2019-05-13 15:17   ` Rainer Fügenstein
  2019-05-15 16:55   ` Rainer Fügenstein
  1 sibling, 0 replies; 5+ messages in thread
From: Rainer Fügenstein @ 2019-05-13 15:17 UTC (permalink / raw)
  To: linux-lvm

hi,

> I don't understand how this could have happened by accident.
> lvm2 provides strong detection of duplicated devices.
> It also detects older metadata.

that's what also puzzles me, since I've never had any issues in this 
regard before.

> So you would have to put in 'exact' but just old 'copy' of your device
> and at the same time drop out the original one - is that what you've 
> made ??

no. this is what happened (from memory):

/dev/sda5 = only PV containing vg_sh64

about 2 months ago:

pvcreate /dev/sdc1 (temp. HD)
vgextend vg_sh64 /dev/sdc1
pvmove /dev/sda5             (move extents off the old PV onto the temp HD)
vgreduce vg_sh64 /dev/sda5

- replace "old" sda with a new, bigger one
- create same partition layout

pvcreate /dev/sda5
vgextend vg_sh64 /dev/sda5
pvmove /dev/sdc1             (move everything back onto the new PV)
vgreduce vg_sh64 /dev/sdc1

what I *DID FORGET* then was to issue "pvremove /dev/sdc1"

a few days ago, I inserted the old temp HD (sdc) into the SATA hot-swap 
bay and powered it on. shortly after that the system behaved strangely. 
"dmesg" showed a lot of error messages regarding SATA. unfortunately, 
memory is blurry from then on since I entered panic mode.

Did a graceful shutdown though (closed all programs, executed 
"poweroff"). from that moment on, the VG on /dev/sda5 was gone.

can't rule out that powering on sdc in the hot-swap bay caused havoc on 
the SATA bus just when lvm was doing some housekeeping.

> VG & LV are just 'terms' - there is no 'physical-conte
> lvm2 provides command: 'vgcfgrestore' - which can restore your older 
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

thanks for the information (also to Roger). I managed to recover a 
promising LVM text file from lost+found on the temp disk and will try to 
restore it as soon as I'm back home again.

thanks - and wish me luck ;-)

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [linux-lvm] recover volume group & locical volumes from PV?
  2019-05-13  8:37 ` Zdenek Kabelac
  2019-05-13 15:17   ` Rainer Fügenstein
@ 2019-05-15 16:55   ` Rainer Fügenstein
  1 sibling, 0 replies; 5+ messages in thread
From: Rainer Fügenstein @ 2019-05-15 16:55 UTC (permalink / raw)
  To: linux-lvm

hi,

I discovered that the LVM text files recovered from lost+found were out 
of date. I got as far as reconstructing an LVM config file from the data 
stored in the first MiB, but after running into syntax errors and values 
that didn't look consistent, I decided to do a re-install.
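[For anyone retracing this later: a crude structural check like the one below could flag such syntax errors before handing the file to vgcfgrestore. `check_vg_meta` is a hypothetical helper, not an lvm2 tool; `vgcfgrestore --test` would do the authoritative validation.]

```shell
# Sketch: sanity-check a hand-reconstructed LVM metadata text file.
# Braces must balance and a 'seqno' must be present; anything else is
# a hint the ring-buffer extraction picked up a truncated generation.
check_vg_meta() {
    f=$1
    opens=$(grep -o '{' "$f" | wc -l)
    closes=$(grep -o '}' "$f" | wc -l)
    if [ "$opens" -ne "$closes" ]; then
        echo "unbalanced braces: $opens open vs $closes close"
        return 1
    fi
    grep -q 'seqno' "$f" || { echo "no seqno found"; return 1; }
    echo "looks structurally sane"
}

# usage (restore only after the check and a vgcfgrestore dry run pass):
#   check_vg_meta recovered.vg && vgcfgrestore --test -f recovered.vg vg_sh64
```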

fortunately, I didn't lose much data.

thank you both for your help; it provided valuable insight.

cu



^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2019-05-15 16:55 UTC | newest]

Thread overview: 5+ messages
-- links below jump to the message on this page --
2019-05-11 22:52 [linux-lvm] recover volume group & locical volumes from PV? "Rainer Fügenstein"
2019-05-12  0:09 ` Roger Heflin
2019-05-13  8:37 ` Zdenek Kabelac
2019-05-13 15:17   ` Rainer Fügenstein
2019-05-15 16:55   ` Rainer Fügenstein
