linux-lvm.redhat.com archive mirror
* [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble
@ 2020-03-21  3:22 Andrew Falgout
  2020-03-23  7:46 ` Bernd Eckenfels
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Falgout @ 2020-03-21  3:22 UTC (permalink / raw)
  To: linux-lvm

This started on a Raspberry Pi 4 running Raspbian.  I moved the disks to my
Fedora 31 system, which is running the latest updates and kernel.  When I
hit the same issues there, I knew it wasn't Raspbian.

I've reached the end of my rope on this.  The disks are there, all three are
accounted for, and the LVM data on them can be seen.  But the volume group
refuses to activate, reporting I/O errors.

[root@hypervisor01 ~]# pvs
  PV         VG                Fmt  Attr PSize    PFree
  /dev/sda1  local_storage01   lvm2 a--  <931.51g       0
  /dev/sdb1  local_storage01   lvm2 a--  <931.51g       0
  /dev/sdc1  local_storage01   lvm2 a--  <931.51g       0
  /dev/sdd1  local_storage01   lvm2 a--  <931.51g       0
  /dev/sde1  local_storage01   lvm2 a--  <931.51g       0
  /dev/sdf1  local_storage01   lvm2 a--  <931.51g <931.51g
  /dev/sdg1  local_storage01   lvm2 a--  <931.51g <931.51g
  /dev/sdh1  local_storage01   lvm2 a--  <931.51g <931.51g
  /dev/sdi3  fedora_hypervisor lvm2 a--    27.33g   <9.44g
  /dev/sdk1  vg1               lvm2 a--    <7.28t       0
  /dev/sdl1  vg1               lvm2 a--    <7.28t       0
  /dev/sdm1  vg1               lvm2 a--    <7.28t       0
[root@hypervisor01 ~]# vgs
  VG                #PV #LV #SN Attr   VSize  VFree
  fedora_hypervisor   1   2   0 wz--n- 27.33g <9.44g
  local_storage01     8   1   0 wz--n- <7.28t <2.73t
  vg1                 3   1   0 wz--n- 21.83t     0
[root@hypervisor01 ~]# lvs
  LV        VG                Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root      fedora_hypervisor -wi-ao---- 15.00g
  swap      fedora_hypervisor -wi-ao----  2.89g
  libvirt   local_storage01   rwi-aor--- <2.73t                                    100.00
  gluster02 vg1               Rwi---r--- 14.55t

The one in question is the vg1/gluster02 logical volume.

I try to activate the VG:
[root@hypervisor01 ~]# vgchange -ay vg1
  device-mapper: reload ioctl on  (253:19) failed: Input/output error
  0 logical volume(s) in volume group "vg1" now active

I've got the debugging output from:
vgchange -ay vg1 -vvvv -dddd
lvchange -ay --partial vg1/gluster02 -vvvv -dddd

I'm just not sure where I should dump the data for people to look at.  Is
there a way to tell the md system to ignore its metadata, since there wasn't
an actual disk failure, and rebuild it from what is in the LVM?  Or can I at
least get the LV to mount so I can pull the data off?
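
For reference, a few read-only commands that should capture the state
people usually ask for (a sketch; none of these write to the disks):

  lvs -a -o name,segtype,devices,health_status vg1   # raid5 sub-LVs and the PVs behind them
  vgck vg1                                           # sanity-check the VG metadata
  lvmdump -m                                         # bundle LVM state and metadata into a tarball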

Any help is appreciated.  If I can save the data, great.  I'm tossing this
out to the community to see if anyone else has an idea of what I can do.
./digitalw00t


* Re: [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble
  2020-03-21  3:22 [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble Andrew Falgout
@ 2020-03-23  7:46 ` Bernd Eckenfels
  2020-03-23 15:58   ` Roger Heflin
  2020-03-24 16:09   ` Andrew Falgout
  0 siblings, 2 replies; 5+ messages in thread
From: Bernd Eckenfels @ 2020-03-23  7:46 UTC (permalink / raw)
  To: LVM general discussion and development

Do you see any dmesg kernel errors when you try to activate the LVs?
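
For example (a hypothetical session; any dm-raid or md/raid5 messages
should show up right after the reload fails):

  vgchange -ay vg1   # reproduce the failure
  dmesg | tail -50   # look for device-mapper / dm-raid / md/raid5 lines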

Regards
Bernd


--
http://bernd.eckenfels.net
________________________________
From: linux-lvm-bounces@redhat.com <linux-lvm-bounces@redhat.com> on behalf of Andrew Falgout <digitalw00t@gmail.com>
Sent: Saturday, March 21, 2020 4:22:04 AM
To: linux-lvm@redhat.com <linux-lvm@redhat.com>
Subject: [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble


[quoted original message trimmed]


* Re: [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble
  2020-03-23  7:46 ` Bernd Eckenfels
@ 2020-03-23 15:58   ` Roger Heflin
  2020-03-24 16:09   ` Andrew Falgout
  1 sibling, 0 replies; 5+ messages in thread
From: Roger Heflin @ 2020-03-23 15:58 UTC (permalink / raw)
  To: LVM general discussion and development

Check cat /proc/mdstat; the 253:19 is likely a /dev/mdX device, and to get
an I/O error like that it has to be in a wrong state.
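
A quick way to confirm what 253:19 actually is (a sketch; on most systems
major 253 is the device-mapper major, so it may be a dm device rather
than /dev/mdX):

  cat /proc/mdstat
  ls -l /dev/dm-19                            # exists if 253:19 is a dm node
  dmsetup info -c | awk '$2==253 && $3==19'   # dm device name, if the node is still there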

On Mon, Mar 23, 2020 at 5:14 AM Bernd Eckenfels <ecki@zusammenkunft.net> wrote:
>
> Do you see any dmesg kernel errors when you try to activate the LVs?
>
> [quoted text trimmed]


* Re: [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble
  2020-03-23  7:46 ` Bernd Eckenfels
  2020-03-23 15:58   ` Roger Heflin
@ 2020-03-24 16:09   ` Andrew Falgout
  2020-03-31  4:56     ` Andrew Falgout
  1 sibling, 1 reply; 5+ messages in thread
From: Andrew Falgout @ 2020-03-24 16:09 UTC (permalink / raw)
  To: LVM general discussion and development

The disks are seen, the volume groups are seen.  When I try to activate the
VG I get this:

vgchange -ay vg1
  device-mapper: reload ioctl on  (253:19) failed: Input/output error
  0 logical volume(s) in volume group "vg1" now active

I executed 'vgchange -ay vg1 -vvvv -dddd' and this is the only time an
error was thrown.
20:53:16.552602 vgchange[10795] device_mapper/libdm-deptree.c:2921  Adding target to (253:19): 0 31256068096 raid raid5_ls 3 128 region_size 32768 3 253:13 253:14 253:15 253:16 253:17 253:18
20:53:16.552609 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1853  dm table   (253:19) [ opencount flush ]   [16384] (*1)
20:53:16.552619 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1853  dm reload   (253:19) [ noopencount flush ]   [16384] (*1)
20:53:16.572481 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1903  device-mapper: reload ioctl on  (253:19) failed: Input/output error
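
For reference, my reading of that table line, going by the dm-raid target
syntax (start, length, target, raid type, #params, params, #devices, then
metadata/data device pairs):

  0 31256068096            start and length in 512-byte sectors (~14.55 TiB)
  raid raid5_ls            dm-raid target, raid5 with left-symmetric layout
  3 128 region_size 32768  three params: 128-sector (64 KiB) chunks, 32768-sector regions
  3                        three raid images
  253:13 253:14 ...        one (metadata, data) sub-LV pair per image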

I've uploaded the two very verbose, debug-ridden logs:
https://pastebin.com/bw5eQBa8
https://pastebin.com/qV5yft05

Ignore the naming: it's not a Gluster volume.  I was planning on making two
and mirroring them with Gluster.

./drae


On Mon, Mar 23, 2020 at 5:14 AM Bernd Eckenfels <ecki@zusammenkunft.net>
wrote:

> Do you see any dmesg kernel errors when you try to activate the LVs?
>
> [quoted text trimmed]


* Re: [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble
  2020-03-24 16:09   ` Andrew Falgout
@ 2020-03-31  4:56     ` Andrew Falgout
  0 siblings, 0 replies; 5+ messages in thread
From: Andrew Falgout @ 2020-03-31  4:56 UTC (permalink / raw)
  To: LVM general discussion and development

Anyone have any ideas how I can salvage this array?
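
The options I've found so far, none of which I've tried yet (sketches
only; --activationmode degraded needs the kernel to accept the mapping,
and vgcfgrestore is a last resort -- the archive path below is a
placeholder):

  lvchange -ay --activationmode degraded vg1/gluster02   # activate despite failed raid images
  lvconvert --repair vg1/gluster02                       # attempt to repair the raid LV
  vgcfgrestore -f /etc/lvm/archive/<backup-file> vg1     # restore earlier VG metadata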

./drae


On Tue, Mar 24, 2020 at 11:09 AM Andrew Falgout <digitalw00t@gmail.com>
wrote:

> The disks are seen, the volume groups are seen.  When I try to activate
> the VG I get this:
>
> [quoted text trimmed]


end of thread

Thread overview: 5 messages
2020-03-21  3:22 [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble Andrew Falgout
2020-03-23  7:46 ` Bernd Eckenfels
2020-03-23 15:58   ` Roger Heflin
2020-03-24 16:09   ` Andrew Falgout
2020-03-31  4:56     ` Andrew Falgout
