From: Andrew Falgout <>
Subject: [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble
Date: Fri, 20 Mar 2020 22:22:04 -0500
Message-ID: <>


This started on a Raspberry Pi 4 running Raspbian.  I moved the disks to my
Fedora 31 system, which is running the latest updates and kernel.  When I
hit the same issues there, I knew it wasn't Raspbian.

I've reached the end of my rope on this.  The disks are there, all three are
accounted for, and the LVM metadata on them can be seen, but the VG refuses
to activate, reporting I/O errors.
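In case it helps anyone hitting the same thing: the kernel log usually records which underlying device actually threw the I/O error during the failed activation.  A generic way to check (nothing here is specific to my setup):

```shell
# Show recent kernel messages; dm-raid and block-layer errors land here
dmesg | tail -n 50

# Or, on systemd machines, filter kernel messages from the current boot
journalctl -k -b | grep -iE 'dm-|raid|I/O error' | tail -n 50
```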

[root@hypervisor01 ~]# pvs
  PV         VG                Fmt  Attr PSize    PFree
  /dev/sda1  local_storage01   lvm2 a--  <931.51g       0
  /dev/sdb1  local_storage01   lvm2 a--  <931.51g       0
  /dev/sdc1  local_storage01   lvm2 a--  <931.51g       0
  /dev/sdd1  local_storage01   lvm2 a--  <931.51g       0
  /dev/sde1  local_storage01   lvm2 a--  <931.51g       0
  /dev/sdf1  local_storage01   lvm2 a--  <931.51g <931.51g
  /dev/sdg1  local_storage01   lvm2 a--  <931.51g <931.51g
  /dev/sdh1  local_storage01   lvm2 a--  <931.51g <931.51g
  /dev/sdi3  fedora_hypervisor lvm2 a--    27.33g   <9.44g
  /dev/sdk1  vg1               lvm2 a--    <7.28t       0
  /dev/sdl1  vg1               lvm2 a--    <7.28t       0
  /dev/sdm1  vg1               lvm2 a--    <7.28t       0
[root@hypervisor01 ~]# vgs
  VG                #PV #LV #SN Attr   VSize  VFree
  fedora_hypervisor   1   2   0 wz--n- 27.33g <9.44g
  local_storage01     8   1   0 wz--n- <7.28t <2.73t
  vg1                 3   1   0 wz--n- 21.83t     0
[root@hypervisor01 ~]# lvs
  LV        VG                Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root      fedora_hypervisor -wi-ao---- 15.00g
  swap      fedora_hypervisor -wi-ao----  2.89g
  libvirt   local_storage01   rwi-aor--- <2.73t
  gluster02 vg1               Rwi---r--- 14.55t

The LV in question is vg1/gluster02.

I try to activate the VG:
[root@hypervisor01 ~]# vgchange -ay vg1
  device-mapper: reload ioctl on  (253:19) failed: Input/output error
  0 logical volume(s) in volume group "vg1" now active
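For what it's worth, the failing device (253:19) can be cross-referenced against the device-mapper tables to see which sub-LV of the raid set the reload choked on.  A generic sketch; the dm names on any given system will differ:

```shell
# List all dm devices with their major:minor numbers
dmsetup info -c

# Show the loaded tables for any raid targets
dmsetup table | grep raid
```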

I've captured the debugging output from:
vgchange -ay vg1 -vvvv -dddd
lvchange -ay --partial vg1/gluster02 -vvvv -dddd
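For capturing those logs, LVM writes its verbose/debug output to stderr, so something like this should collect it into files that can be attached or pasted (file paths are just examples):

```shell
# Redirect stderr (where the -vvvv output goes) to log files
vgchange -ay vg1 -vvvv 2> /tmp/vgchange-vg1.log
lvchange -ay --partial vg1/gluster02 -vvvv 2> /tmp/lvchange-gluster02.log
```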

I'm just not sure where I should dump the data for people to look at.  Is
there a way to tell the md layer to ignore its metadata, since there wasn't
an actual disk failure, and rebuild it from what is in the LVM?  Or can I at
least get the LV activated, so I can pull the data off?
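One avenue I'm aware of, though I haven't dared touch any restore step yet: LVM keeps archived copies of the VG metadata under /etc/lvm/archive, and newer lvm2 can dump the on-disk metadata directly, so the state before the failure can at least be inspected (pvck --dump needs lvm2 >= 2.03; these commands are read-only):

```shell
# List archived metadata versions for the VG
vgcfgrestore --list vg1

# Dump the metadata as it currently sits on one of the PVs (lvm2 >= 2.03)
pvck --dump metadata /dev/sdk1
```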

Any help is appreciated.  If I can save the data, great.  I'm tossing this
out to the community to see if anyone else has an idea of what I can do.


Thread overview: 5+ messages
2020-03-21  3:22 Andrew Falgout [this message]
2020-03-23  7:46 ` [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble Bernd Eckenfels
2020-03-23 15:58   ` Roger Heflin
2020-03-24 16:09   ` Andrew Falgout
2020-03-31  4:56     ` Andrew Falgout
