linux-lvm.redhat.com archive mirror
From: Krzysztof Chojnowski <frirajder@gmail.com>
To: linux-lvm@redhat.com
Subject: [linux-lvm] LVM cachepool inconsistency after power event
Date: Mon, 4 Oct 2021 15:30:43 +0200	[thread overview]
Message-ID: <CAPWGHt1MCzPbEHrF-5SRDavHqOfhMSqhkfAUQDMYT=HJ7TRfuQ@mail.gmail.com> (raw)

Hello all,

I'm experimenting with lvmcache, using an NVMe disk to speed up access
to rotational disks. The cachepool was in writeback mode when a power
failure occurred, which left the cache in an inconsistent state. So far
I have managed to activate the underlying _corig LV and copy the data
off it, but I'm wondering whether such a cache can still be repaired,
and if not, how I can remove the failed LVs to reclaim the disk space.
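
For reference, this is roughly how I salvaged the data (from memory;
the mount point and backup path are just examples from my setup). It
relies on LVM allowing read-only component activation of the hidden
origin LV, and of course anything that was only in the dirty writeback
cache is missing from the copy:

```shell
# Activate the hidden cache-origin LV as a read-only component
# (recent LVM2 permits this while the top-level LV is inactive):
lvchange -ay vg0/tpg1-wdata_corig

# Mount read-only and copy the data off to safe storage.
# NOTE: dirty writeback blocks that never reached the origin are lost.
mkdir -p /mnt/rescue
mount -o ro /dev/vg0/tpg1-wdata_corig /mnt/rescue
rsync -aHAX /mnt/rescue/ /backup/tpg1-wdata/
umount /mnt/rescue
```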

This is the relevant portion of the lvs output:
  tpg1-wdata                    vg0 Cwi---C--- 500,00g [wdata_cachepool_cpool] [tpg1-wdata_corig]
  [tpg1-wdata_corig]            vg0 owi---C--- 500,00g
  [wdata_cachepool_cpool]       vg0 Cwi---C---  50,00g
  [wdata_cachepool_cpool_cdata] vg0 Cwi-------  50,00g
  [wdata_cachepool_cpool_cmeta] vg0 ewi-------  40,00m

Trying to activate the tpg1-wdata LV results in an error:
sudo lvchange -ay -v vg0/tpg1-wdata
  Activating logical volume vg0/tpg1-wdata.
  activation/volume_list configuration setting not defined: Checking only host tags for vg0/tpg1-wdata.
  Creating vg0-wdata_cachepool_cpool_cdata
  Loading table for vg0-wdata_cachepool_cpool_cdata (253:17).
  Resuming vg0-wdata_cachepool_cpool_cdata (253:17).
  Creating vg0-wdata_cachepool_cpool_cmeta
  Loading table for vg0-wdata_cachepool_cpool_cmeta (253:18).
  Resuming vg0-wdata_cachepool_cpool_cmeta (253:18).
  Creating vg0-tpg1--wdata_corig
  Loading table for vg0-tpg1--wdata_corig (253:19).
  Resuming vg0-tpg1--wdata_corig (253:19).
  Executing: /usr/sbin/cache_check -q /dev/mapper/vg0-wdata_cachepool_cpool_cmeta
  /usr/sbin/cache_check failed: 1
  Piping: /usr/sbin/cache_check -V
  Found version of /usr/sbin/cache_check 0.9.0 is better then requested 0.7.0.
  Check of pool vg0/wdata_cachepool_cpool failed (status:1). Manual repair required!
  Removing vg0-tpg1--wdata_corig (253:19)
  Removing vg0-wdata_cachepool_cpool_cmeta (253:18)
  Removing vg0-wdata_cachepool_cpool_cdata (253:17)

I tried repairing the volume, but nothing changed:
sudo lvconvert --repair -v vg0/tpg1-wdata
  activation/volume_list configuration setting not defined: Checking only host tags for vg0/lvol6_pmspare.
  Creating vg0-lvol6_pmspare
  Loading table for vg0-lvol6_pmspare (253:17).
  Resuming vg0-lvol6_pmspare (253:17).
  activation/volume_list configuration setting not defined: Checking only host tags for vg0/wdata_cachepool_cpool_cmeta.
  Creating vg0-wdata_cachepool_cpool_cmeta
  Loading table for vg0-wdata_cachepool_cpool_cmeta (253:18).
  Resuming vg0-wdata_cachepool_cpool_cmeta (253:18).
  Executing: /usr/sbin/cache_repair -i /dev/mapper/vg0-wdata_cachepool_cpool_cmeta -o /dev/mapper/vg0-lvol6_pmspare
  Removing vg0-wdata_cachepool_cpool_cmeta (253:18)
  Removing vg0-lvol6_pmspare (253:17)
  Preparing pool metadata spare volume for Volume group vg0.
  Archiving volume group "vg0" metadata (seqno 51).
  Creating logical volume lvol7
  Creating volume group backup "/etc/lvm/backup/vg0" (seqno 52).
  Activating logical volume vg0/lvol7.
  activation/volume_list configuration setting not defined: Checking only host tags for vg0/lvol7.
  Creating vg0-lvol7
  Loading table for vg0-lvol7 (253:17).
  Resuming vg0-lvol7 (253:17).
  Initializing 40,00 MiB of logical volume vg0/lvol7 with value 0.
  Temporary logical volume "lvol7" created.
  Removing vg0-lvol7 (253:17)
  Renaming lvol7 as pool metadata spare volume lvol7_pmspare.
  WARNING: If everything works, remove vg0/tpg1-wdata_meta1 volume.
  WARNING: Use pvmove command to move vg0/wdata_cachepool_cpool_cmeta on the best fitting PV.

Trying to remove the cache volume also fails:
sudo lvremove -ff vg0/tpg1-wdata
  Check of pool vg0/wdata_cachepool_cpool failed (status:1). Manual repair required!
  Failed to activate vg0/tpg1-wdata to flush cache.

Any help in resolving this is appreciated!
Thanks,

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Thread overview: 11+ messages
2021-10-04 13:30 Krzysztof Chojnowski [this message]
2021-10-05 10:34 ` Ming-Hung Tsai
2021-10-05 11:34 ` Krzysztof Chojnowski
2021-10-05 15:51   ` Zdenek Kabelac
2021-10-05 16:56     ` Ming Hung Tsai
2021-10-06  8:27       ` Krzysztof Chojnowski
2021-10-08  7:56         ` Ming Hung Tsai
2021-10-11 12:16           ` Krzysztof Chojnowski
2021-10-11 16:29             ` Ming Hung Tsai
2021-10-11 18:57               ` Krzysztof Chojnowski
2021-10-05 11:13 Krzysztof Chojnowski
