From: Andrew Falgout <digitalw00t@gmail.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble
Date: Mon, 30 Mar 2020 23:56:10 -0500 [thread overview]
Message-ID: <CAHRMh5X1L1xtNyLO2t5yDvJvTm1++d4N8M7UhXnPAGYP79t8zA@mail.gmail.com> (raw)
In-Reply-To: <CAHRMh5X9c=5jKP6-ixX8GBeQSBRXcicFiHGLFzx3V8+zc3Z3mg@mail.gmail.com>
Anyone have any ideas how I can salvage this array?
./drae
On Tue, Mar 24, 2020 at 11:09 AM Andrew Falgout <digitalw00t@gmail.com>
wrote:
> The disks are seen and the volume groups are seen. When I try to
> activate the VG, I get this:
>
> vgchange -ay vg1
> device-mapper: reload ioctl on (253:19) failed: Input/output error
> 0 logical volume(s) in volume group "vg1" now active
>
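> As a sanity check on what device-mapper got through before the reload
> failed, the device list and tables can be dumped with stock dmsetup
> calls (nothing here is specific to my setup):
>
> dmsetup info -c | grep vg1
> dmsetup table | grep vg1
>
> That should show whether the rimage/rmeta sub-devices were created even
> though the top-level vg1-gluster02 device never got a live table.
>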
> I executed 'vgchange -ay vg1 -vvvv -dddd', and this is the only point
> where an error was thrown:
> 20:53:16.552602 vgchange[10795] device_mapper/libdm-deptree.c:2921 Adding target to (253:19): 0 31256068096 raid raid5_ls 3 128 region_size 32768 3 253:13 253:14 253:15 253:16 253:17 253:18
> 20:53:16.552609 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1853 dm table (253:19) [ opencount flush ] [16384] (*1)
> 20:53:16.552619 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1853 dm reload (253:19) [ noopencount flush ] [16384] (*1)
> 20:53:16.572481 vgchange[10795] device_mapper/ioctl/libdm-iface.c:1903 device-mapper: reload ioctl on (253:19) failed: Input/output error
>
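> Reading the failing table line back, the raid5_ls set is assembled from
> three metadata/data sub-device pairs (253:13+253:14, 253:15+253:16,
> 253:17+253:18) with a 64KiB chunk. A quick read test of the member
> partitions, purely to rule out plain disk errors (device names taken
> from my earlier pvs output; adjust as needed):
>
> for d in /dev/sdk1 /dev/sdl1 /dev/sdm1; do
>   dd if=$d of=/dev/null bs=1M count=64 && echo "$d reads OK"
> done
>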
> I've uploaded the two very verbose, debug-ridden logs:
> https://pastebin.com/bw5eQBa8
> https://pastebin.com/qV5yft05
>
> Ignore the naming; it's not a Gluster volume. I was planning to build
> two of these arrays and mirror them with Gluster.
>
> ./drae
>
>
> On Mon, Mar 23, 2020 at 5:14 AM Bernd Eckenfels <ecki@zusammenkunft.net>
> wrote:
>
>> Do you see any dmesg kernel errors when you try to activate the LVs?
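>>
>> For example, in a second shell run:
>>
>> dmesg -w
>>
>> and then retry the vgchange; dm-raid is usually quite verbose about why
>> a table load was rejected.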
>>
>> Regards
>> Bernd
>>
>>
>> --
>> http://bernd.eckenfels.net
>> ------------------------------
>> *From:* linux-lvm-bounces@redhat.com <linux-lvm-bounces@redhat.com> on
>> behalf of Andrew Falgout <digitalw00t@gmail.com>
>> *Sent:* Saturday, March 21, 2020 4:22:04 AM
>> *To:* linux-lvm@redhat.com <linux-lvm@redhat.com>
>> *Subject:* [linux-lvm] lvm raid5 : drives all present but vg/lvm will
>> not assemble
>>
>>
>> This started on a Raspberry Pi 4 running Raspbian. I moved the disks to
>> my Fedora 31 system, which is running the latest updates and kernel.
>> When I hit the same issues there, I knew it wasn't Raspbian.
>>
>> I've reached the end of my rope on this. The disks are there, all three
>> are accounted for, and the LVM data on them can be seen. But the volume
>> group refuses to activate, reporting I/O errors.
>>
>> [root@hypervisor01 ~]# pvs
>>   PV         VG                Fmt  Attr PSize    PFree
>>   /dev/sda1  local_storage01   lvm2 a--  <931.51g        0
>>   /dev/sdb1  local_storage01   lvm2 a--  <931.51g        0
>>   /dev/sdc1  local_storage01   lvm2 a--  <931.51g        0
>>   /dev/sdd1  local_storage01   lvm2 a--  <931.51g        0
>>   /dev/sde1  local_storage01   lvm2 a--  <931.51g        0
>>   /dev/sdf1  local_storage01   lvm2 a--  <931.51g <931.51g
>>   /dev/sdg1  local_storage01   lvm2 a--  <931.51g <931.51g
>>   /dev/sdh1  local_storage01   lvm2 a--  <931.51g <931.51g
>>   /dev/sdi3  fedora_hypervisor lvm2 a--    27.33g   <9.44g
>>   /dev/sdk1  vg1               lvm2 a--    <7.28t        0
>>   /dev/sdl1  vg1               lvm2 a--    <7.28t        0
>>   /dev/sdm1  vg1               lvm2 a--    <7.28t        0
>> [root@hypervisor01 ~]# vgs
>>   VG                #PV #LV #SN Attr   VSize  VFree
>>   fedora_hypervisor   1   2   0 wz--n- 27.33g <9.44g
>>   local_storage01     8   1   0 wz--n- <7.28t <2.73t
>>   vg1                 3   1   0 wz--n- 21.83t      0
>> [root@hypervisor01 ~]# lvs
>>   LV        VG                Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>>   root      fedora_hypervisor -wi-ao---- 15.00g
>>   swap      fedora_hypervisor -wi-ao----  2.89g
>>   libvirt   local_storage01   rwi-aor--- <2.73t                                   100.00
>>   gluster02 vg1               Rwi---r--- 14.55t
>>
>> The one in question is the vg1/gluster02 logical volume.
>>
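>> For reference, the hidden raid sub-LVs (the rimage/rmeta volumes the
>> kernel assembles the array from) and their placement on the PVs can be
>> listed with a plain lvs call:
>>
>> lvs -a -o lv_name,segtype,devices vg1
>>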
>> I try to activate the VG:
>> [root@hypervisor01 ~]# vgchange -ay vg1
>> device-mapper: reload ioctl on (253:19) failed: Input/output error
>> 0 logical volume(s) in volume group "vg1" now active
>>
>> I've got the debugging output from:
>> vgchange -ay vg1 -vvvv -dddd
>> lvchange -ay --partial vg1/gluster02 -vvvv -dddd
>>
>> I'm just not sure where I should dump the data for people to look at.
>> Is there a way to tell the md layer to ignore its metadata, since there
>> wasn't an actual disk failure, and rebuild that metadata from what is
>> in the LVM? Or can I at least get the LV to mount so I can pull the
>> data off?
>>
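>> From what I've been reading, there are at least two things worth
>> trying, though I haven't confirmed either is safe here (the archive
>> filename below is a placeholder):
>>
>> # allow activation of a RAID LV even if devices are missing;
>> # --activationmode accepts complete, degraded, or partial
>> lvchange -ay --activationmode degraded vg1/gluster02
>>
>> # or roll the LVM metadata back to an archived copy
>> vgcfgrestore --list vg1
>> vgcfgrestore -f /etc/lvm/archive/vg1_XXXXX.vg vg1
>>
>> I gather vgcfgrestore can be risky on a VG with raid LVs, so I'd want a
>> second opinion before trying that one.
>>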
>> Any help is appreciated. If I can save the data, great. I'm tossing
>> this out to the community to see if anyone else has an idea of what I
>> can do.
>> ./digitalw00t
>> _______________________________________________
>> linux-lvm mailing list
>> linux-lvm@redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-lvm
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
>
Thread overview: 5+ messages
2020-03-21 3:22 [linux-lvm] lvm raid5 : drives all present but vg/lvm will not assemble Andrew Falgout
2020-03-23 7:46 ` Bernd Eckenfels
2020-03-23 15:58 ` Roger Heflin
2020-03-24 16:09 ` Andrew Falgout
2020-03-31 4:56 ` Andrew Falgout [this message]