From: Jeff Allison <jeff.allison@allygray.2y.net>
To: linux-lvm@redhat.com
Subject: [linux-lvm] How to trash a broken VG
Date: Fri, 3 Aug 2018 15:21:36 +1000
Message-ID: <CAPrpM6xO4x2AuD2-BbDRbcVxRUqWTsecqyikhBhvN-ioT=3jnw@mail.gmail.com>

OK chaps, I've broken it.

I have a VG containing one LV and made up of 3 live disks and 2 failed disks.

Whilst the disks were failing I attempted to move data off them, which
also failed, so I now have a pvmove0 that won't go away either.
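
(For what it's worth, my understanding is that pvmove0 is an internal
LV, so it only shows up when hidden LVs are listed, e.g.

  lvs -a -o +devices vg_backup

which at least shows which segments it is still tied to, assuming the
metadata can be read at all.)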


So even if I attempt to remove a live disk, I get an error:

[root@nas ~]# vgreduce -v vg_backup /dev/sdi1
    Using physical volume(s) on command line.
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
  Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
    There are 2 physical volumes missing.
  Cannot change VG vg_backup while PVs are missing.
  Consider vgreduce --removemissing.
    There are 2 physical volumes missing.
  Cannot process volume group vg_backup
  Failed to find physical volume "/dev/sdi1".
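
Those two UUIDs are the PVs the metadata still expects. From what I've
read, one recovery route is to recreate a stand-in PV carrying the old
UUID from the automatic metadata backup and then restore the VG
metadata; a rough sketch, where /dev/sdX1 is just a placeholder for a
replacement partition and assuming /etc/lvm/backup/vg_backup is current:

  pvcreate --uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV \
    --restorefile /etc/lvm/backup/vg_backup /dev/sdX1
  vgcfgrestore vg_backup

I haven't gone down that route, so I don't know if it helps when the
failed disks are gone for good.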

Then if I attempt a vgreduce --removemissing I get:

[root@nas ~]# vgreduce --removemissing vg_backup
  Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
  Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
  WARNING: Partial LV lv_backup needs to be repaired or removed.
  WARNING: Partial LV pvmove0 needs to be repaired or removed.
  There are still partial LVs in VG vg_backup.
  To remove them unconditionally use: vgreduce --removemissing --force.
  Proceeding to remove empty missing PVs.

So I try --force:
[root@nas ~]# vgreduce --removemissing --force vg_backup
  Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
  Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
  Removing partial LV lv_backup.
  Can't remove locked LV lv_backup.

So no go.
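
Reading the pvmove(8) man page, I suspect the lock on lv_backup comes
from the interrupted move, and that the supported cleanup is

  pvmove --abort

rather than removing the LVs by hand, though I don't know whether it
copes with the source PVs missing entirely.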

If I try lvremove pvmove0 I get:

[root@nas ~]# lvremove -v pvmove0
    Using logical volume(s) on command line.
    VG name on command line not found in list of VGs: pvmove0
    Wiping cache of LVM-capable devices
  Volume group "pvmove0" not found
  Cannot process volume group pvmove0
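
If I read that right, lvremove took pvmove0 as a VG name because it
expects a vg/lv path, so presumably the full form would be

  lvremove -v vg_backup/pvmove0

though I gather pvmove0 is an internal LV that lvremove may refuse to
touch anyway.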

So Heeelp, I seem to be caught in some kind of loop.
