linux-lvm.redhat.com archive mirror
From: Zdenek Kabelac <zkabelac@redhat.com>
To: "LVM general discussion and development" <linux-lvm@redhat.com>,
	"Rainer Fügenstein" <rfu@oudeis.org>
Subject: Re: [linux-lvm] recover volume group & logical volumes from PV?
Date: Mon, 13 May 2019 10:37:39 +0200	[thread overview]
Message-ID: <6747eb8e-b15c-dfdd-610b-bc5143368a63@redhat.com> (raw)
In-Reply-To: <3f6f98d1b5184853cf48b1f84337af83.squirrel@service.net-works.at>

On 12. 05. 19 at 0:52, "Rainer Fügenstein" wrote:
> hi,
> 
> I am (was) using Fedora 28 installed in several LVs on /dev/sda5 (= PV),
> where sda is a "big" SSD.
> 
> By accident, I attached (via SATA hot swap bay) an old hard disk
> (/dev/sdc1), which was used temporarily for about 2 months to move the
> volume group / logical volumes from the "old" SSD to the "new" SSD
> (pvadd, pvmove, ...)
> 

Hi

I don't understand how this could have happened by accident.
lvm2 provides strong detection of duplicated devices.
It also detects older metadata.

So you would have had to attach an exact but older 'copy' of your device
and at the same time remove the original one - is that what you did?

> This combination of old PV and new PV messed up the filesystems. When I
> noticed the mistake, I did a shutdown and physically removed /dev/sdc.
> This also removed the VG and LVs on /dev/sda5, causing the system to
> crash on boot.
> 
> 
> [root@localhost-live ~]# pvs
>    PV         VG              Fmt  Attr PSize    PFree
>    /dev/sda5                  lvm2 ---   <47.30g <47.30g
>    /dev/sdc1  fedora_sh64     lvm2 a--  <298.09g 273.30g
> 
> is there any chance to get VG and LVs back?


VG & LV are just 'terms' - there is no physical content behind them. So if
you have already used your filesystem and modified its bits on the device,
the physical content of your storage is simply overwritten and there is no
way to recover it by just fixing lvm2 metadata.

lvm2 provides the command 'vgcfgrestore', which can restore your older
metadata content (a description of which devices are used and where the
individual LVs place their extents - basically a mapping of blocks). The
archives are typically kept in your /etc/lvm/archive directory, and in the
worst case you can obtain older metadata by scanning the 1st MiB of your
physical drive - the data are stored there in ASCII format in a ring buffer,
so for your small set of LVs you should likely find the full history there.
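
For illustration, a rough sketch of both approaches (the VG name
'fedora_sh64' and /dev/sda5 are taken from your pvs output above; the
archive file name is only a placeholder - use the real name shown by
the --list output):

   # show which archived metadata versions lvm2 still knows about
   vgcfgrestore --list fedora_sh64

   # restore a chosen archive file (deactivate the VG's LVs first)
   vgcfgrestore -f /etc/lvm/archive/fedora_sh64_XXXXX.vg fedora_sh64

   # worst case: dump the ASCII metadata ring buffer from the start of the PV
   dd if=/dev/sda5 bs=1M count=1 | strings | less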

Once you have put back your original 'drive set' and restored your lvm2
metadata to the point before you started playing with the bad drive, your
only hope is a properly working 'fsck' - there is nothing lvm2 can do to
help with that part.
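
If you go that route, a cautious sketch might look like this (the LV path
is just a guess at typical Fedora naming - adjust to your actual LV names):

   # activate the restored VG
   vgchange -ay fedora_sh64

   # read-only check first; re-run without -n to actually repair
   fsck -n /dev/fedora_sh64/root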

Regards

Zdenek


Thread overview: 5+ messages
2019-05-11 22:52 [linux-lvm] recover volume group & logical volumes from PV? "Rainer Fügenstein"
2019-05-12  0:09 ` Roger Heflin
2019-05-13  8:37 ` Zdenek Kabelac [this message]
2019-05-13 15:17   ` Rainer Fügenstein
2019-05-15 16:55   ` Rainer Fügenstein
