From: "Rainer Fügenstein" <>
Subject: Re: [linux-lvm] recover volume group & logical volumes from PV?
Date: Wed, 15 May 2019 18:55:37 +0200	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>


I discovered that the lvm text files recovered from lost+found were out
of date. I got as far as reconstructing such an lvm config file from data
stored in the first MiB, but after running into syntax errors and values that
didn't look consistent, I decided to do a re-install.

fortunately, I didn't lose much data.

thank you both for your help; it provided valuable insight.


On 13/05/2019 10:37, Zdenek Kabelac wrote:
> On 12. 05. 19 at 0:52, "Rainer Fügenstein" wrote:
>> hi,
>> I am (was) using Fedora 28 installed in several LVs on /dev/sda5 (= PV),
>> where sda is a "big" SSD.
>> by accident, I attached (via SATA hot swap bay) an old hard disk
>> (/dev/sdc1), which was used for about 2 months to temporarily move the
>> volume group / logical volumes from the "old" SSD to the "new" SSD
>> (pvadd, pvmove, ...)
> Hi
> I don't understand how this could have happened by accident.
> lvm2 provides strong detection of duplicated devices.
> It also detects older metadata.
> So you would have to put in an 'exact' but old 'copy' of your device
> and at the same time drop out the original one - is that what you've
> done??
>> this combination of old PV and new PV messed up the filesystems. when I
>> noticed the mistake, I did a shutdown and physically removed /dev/sdc.
>> this also removed the VG and LVs on /dev/sda5, causing the system to
>> crash on boot.
>> [root@localhost-live ~]# pvs
>>   PV         VG           Fmt  Attr PSize    PFree
>>   /dev/sda5               lvm2 ---   <47.30g <47.30g
>>   /dev/sdc1  fedora_sh64  lvm2 a--  <298.09g 273.30g
>> is there any chance to get VG and LVs back?
> VG & LV are just 'terms' - there is no 'physical content' behind them -
> so if you've already used your filesystem and modified its bits on a
> device, the physical content of your storage is simply overwritten and
> there is no way to recover it by just fixing lvm2 metadata.
> lvm2 provides the command 'vgcfgrestore', which can restore your older
> metadata content (a description of which devices are used and where the
> individual LVs place their extents - basically a mapping of blocks) -
> typically from your /etc/lvm/archive directory - and in the worst case
> you can obtain older metadata by scanning the first MiB of your physical
> drive - the data are there in ASCII format in a ring buffer, so for your
> small set of LVs you likely have the full history there.
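[Editorial note: the metadata carving described above can be sketched as follows. This demo uses a scratch file with a fake metadata line standing in for the PV; on a real system you would point dd at the device itself (e.g. /dev/sda5) and look for the newest complete VG description.]

```shell
# Scratch file standing in for the PV (e.g. /dev/sda5); the "metadata"
# written into it is purely illustrative, not real lvm2 output.
PV=pv-demo.img
printf 'fedora_sh64 {\nseqno = 7\n}\n' > "$PV"

# lvm2 keeps its metadata as ASCII text in a ring buffer within the
# first MiB of the PV, so old copies can be carved out with strings(1):
dd if="$PV" bs=1M count=1 2>/dev/null | strings -n 4 > pv-metadata.txt

grep -c 'fedora_sh64' pv-metadata.txt   # prints 1: the VG name was found
```

From pv-metadata.txt you would then pick the newest consistent `vgname { ... }` block and feed it to `vgcfgrestore --file`.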
> When you put back your original 'drive set' and restore your lvm2
> metadata to the point before you started to play with the bad drive,
> then your only hope is a properly working 'fsck' - but there is nothing
> lvm2 can do to help with that.
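[Editorial note: the restore-then-fsck sequence might look like the sketch below. The VG name is taken from the pvs output earlier in the thread, the archive filename and LV path are hypothetical, and a dry-run wrapper echoes each command instead of executing it, since these act on real block devices.]

```shell
# Dry-run wrapper: print each command rather than run it.
run() { echo "+ $*"; }

VG=fedora_sh64                    # VG name from the pvs output above

run vgcfgrestore --list "$VG"     # list archived metadata versions
# Restore the archive entry from just before the bad drive was attached
# (filename is hypothetical):
run vgcfgrestore -f /etc/lvm/archive/fedora_sh64_00007.vg "$VG"
run vgchange -ay "$VG"            # activate the restored LVs
run fsck -n /dev/fedora_sh64/root # read-only filesystem check first
```

Running `fsck -n` first shows the damage without writing; only after reviewing that output would you re-run fsck in repair mode.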
> Regards
> Zdenek


Thread overview: 5+ messages
2019-05-11 22:52 [linux-lvm] recover volume group & logical volumes from PV? "Rainer Fügenstein"
2019-05-12  0:09 ` Roger Heflin
2019-05-13  8:37 ` Zdenek Kabelac
2019-05-13 15:17   ` Rainer Fügenstein
2019-05-15 16:55   ` Rainer Fügenstein [this message]
