linux-lvm.redhat.com archive mirror
From: Olaf Seibert <o.seibert@syseleven.de>
To: linux-lvm@redhat.com
Subject: [linux-lvm] raid10 with missing redundancy, but health status claims it is ok.
Date: Fri, 27 May 2022 15:56:03 +0200
Message-ID: <4ea715cc-3a58-bb47-af41-f40e630f93f3@syseleven.de>

Hi all, I'm new to this list. I hope somebody here can help me.

We had a disk go bad (commands to it timed out, and took many seconds to
do so) in our LVM installation with mirroring. With some trouble, we
managed to pvremove the offending disk, and used `lvconvert --repair -y
nova/$lv` to restore redundancy on the logical volumes.

One logical volume still seems to have trouble, though. In `lvs -o
devices -a` it shows no devices for two of its subvolumes, and those
carry the odd 'v' (virtual) attribute:


```
  LV                VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  lvname            nova Rwi-aor--- 800.00g                                    100.00           lvname_rimage_0(0),lvname_rimage_1(0),lvname_rimage_2(0),lvname_rimage_3(0)
  [lvname_rimage_0] nova iwi-aor--- 400.00g                                                     /dev/sdc1(19605)
  [lvname_rimage_1] nova iwi-aor--- 400.00g                                                     /dev/sdi1(19605)
  [lvname_rimage_2] nova vwi---r--- 400.00g
  [lvname_rimage_3] nova iwi-aor--- 400.00g                                                     /dev/sdj1(19605)
  [lvname_rmeta_0]  nova ewi-aor---  64.00m                                                     /dev/sdc1(19604)
  [lvname_rmeta_1]  nova ewi-aor---  64.00m                                                     /dev/sdi1(19604)
  [lvname_rmeta_2]  nova ewi---r---  64.00m
  [lvname_rmeta_3]  nova ewi-aor---  64.00m                                                     /dev/sdj1(19604)
```
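For context, the first character of the lvs `Attr` field encodes the
volume type. A small helper of my own (not part of LVM; meanings
paraphrased from the lvs(8) man page) decodes the characters seen above:

```shell
# decode_lv_type: print the meaning of the first Attr character of `lvs`.
# Only the types that appear in the listing above are covered.
decode_lv_type() {
    case "$1" in
        R*) echo "raid (without initial sync)";;
        r*) echo "raid";;
        i*) echo "raid or mirror image";;
        I*) echo "raid or mirror image, out of sync";;
        v*) echo "virtual (error/zero segment, no backing device)";;
        e*) echo "raid metadata";;
        *)  echo "unknown";;
    esac
}

decode_lv_type "vwi---r---"   # the suspicious rimage_2 attr
```

So the 'v' means rimage_2 is now backed by nothing at all, not by a
device that merely needs resyncing.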
Also, according to `lvdisplay -am` there is a problem with
`..._rimage_2` and `..._rmeta_2`:
```
  --- Logical volume ---
  Internal LV Name       lvname_rimage_2
  VG Name                nova
  LV UUID                xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  LV Write Access        read/write
  LV Creation host, time xxxxxxxxx, 2021-07-09 16:45:21 +0000
  LV Status              NOT available
  LV Size                400.00 GiB
  Current LE             6400
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Segments ---
  Virtual extents 0 to 6399:
    Type                error

  --- Logical volume ---
  Internal LV Name       lvname_rmeta_2
  VG Name                nova
  LV UUID                xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  LV Write Access        read/write
  LV Creation host, time xxxxxxxxx, 2021-07-09 16:45:21 +0000
  LV Status              NOT available
  LV Size                64.00 MiB
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Segments ---
  Virtual extents 0 to 0:
    Type                error
```

Similarly, the VG metadata shows the matching error segment:

```
                lvname_rimage_2 {
                        id = "..."
                        status = ["READ", "WRITE"]
                        flags = []
                        creation_time = 1625849121      # 2021-07-09 16:45:21 +0000
                        creation_host = "cbk130133"
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 6400     # 400 Gigabytes

                                type = "error"
                        }
                }
```
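As a sanity check on those numbers (assuming the 64 MiB extent size
implied by the one-extent, 64 MiB rmeta LVs), 6400 extents do work out
to the 400 GiB shown:

```shell
# 6400 extents * 64 MiB per extent, expressed in GiB
echo $(( 6400 * 64 / 1024 ))
```

so the metadata is internally consistent; only the segment type is wrong.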




On the other hand, the health status reads as normal:

```
[13:38:20] root@cbk130133:~# lvs -o +lv_health_status
  LV     VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Health
  lvname nova Rwi-aor--- 800.00g       ..                           100.00
```



We tried various combinations of `lvconvert --repair -y nova/$lv` and
`lvchange --syncaction repair` on it without effect.
`lvchange -ay` doesn't work either:

```
$ sudo lvchange -ay nova/lvname_rmeta_2
  Operation not permitted on hidden LV nova/lvname_rmeta_2.
$ sudo lvchange -ay nova/lvname
$ # (no effect)
$ sudo lvconvert --repair nova/lvname_rimage_2
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  Command on LV nova/lvname_rimage_2 does not accept LV type error.
  Command not permitted on LV nova/lvname_rimage_2.
$ sudo lvchange --resync nova/lvname_rimage_2
  WARNING: Not using lvmetad because a repair command was run.
  Command on LV nova/lvname_rimage_2 does not accept LV type error.
  Command not permitted on LV nova/lvname_rimage_2.
$ sudo lvchange --resync nova/lvname
  WARNING: Not using lvmetad because a repair command was run.
  Logical volume nova/lvname in use.
  Can't resync open logical volume nova/lvname.
$ lvchange --rebuild /dev/sdf1 nova/lvname
  WARNING: Not using lvmetad because a repair command was run.
Do you really want to rebuild 1 PVs of logical volume nova/lvname [y/n]: y
  device-mapper: create ioctl on lvname_rmeta_2 LVM-blah failed: Device or resource busy
  Failed to lock logical volume nova/lvname.
$ lvchange --raidsyncaction repair nova/lvname
$ # (took a long time to complete but didn't change anything)
$ sudo lvconvert --mirrors +1 nova/lvname
  Using default stripesize 64.00 KiB.
  --mirrors/-m cannot be changed with raid10.
```



Any idea how to restore redundancy on this logical volume? It is in
continuous use, of course...
It seems we must somehow convince LVM to allocate new space for the
subvolume instead of the error segment (there is plenty of free space
in the volume group).
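For the record, one avenue we have considered but not yet tried on the
live VG is to dump the metadata with `vgcfgbackup`, hand-edit the error
segments back to real segments on free PVs, and restore with
`vgcfgrestore` (which may need forcing). Locating the offending
segments in a dump is straightforward; the sketch below inlines a
minimal sample fragment so it is self-contained:

```shell
# On the real system the dump would come from:
#   vgcfgbackup -f /tmp/nova.vg nova
# Here we inline a minimal sample fragment instead.
cat > /tmp/nova.vg <<'EOF'
lvname_rimage_2 {
        segment1 {
                start_extent = 0
                extent_count = 6400
                type = "error"
        }
}
EOF

# List any LV containing an "error" segment, with enough context
# to see which subvolume it belongs to.
grep -B 4 'type = "error"' /tmp/nova.vg
```

Whether hand-editing is actually safe here is exactly what I'd like
advice on.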

Thanks in advance.

-Olaf

-- 
SysEleven GmbH
Boxhagener Straße 80
10245 Berlin

T +49 30 233 2012 0
F +49 30 616 7555 0

http://www.syseleven.de
http://www.facebook.com/SysEleven
https://www.instagram.com/syseleven/

Aktueller System-Status immer unter:
http://www.twitter.com/syseleven

Firmensitz: Berlin
Registergericht: AG Berlin Charlottenburg, HRB 108571 B
Geschäftsführer: Marc Korthaus, Jens Ihlenfeld, Andreas Hermann

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Thread overview: 7+ messages
2022-05-27 13:56 Olaf Seibert [this message]
2022-05-28 16:15 ` John Stoffel
2022-05-30  8:16   ` Olaf Seibert
2022-05-30  8:49     ` Olaf Seibert
2022-06-01 21:58       ` John Stoffel
2022-05-30 14:07     ` Demi Marie Obenour
2022-05-31 11:27       ` Olaf Seibert
