linux-lvm.redhat.com archive mirror
* [linux-lvm] thin: pool target too small
@ 2020-09-20 23:48 Duncan Townsend
  2020-09-21  9:23 ` Zdenek Kabelac
  0 siblings, 1 reply; 14+ messages in thread
From: Duncan Townsend @ 2020-09-20 23:48 UTC (permalink / raw)
  To: linux-lvm

Hello!

I think the problem I'm having is related to this earlier thread:
https://www.redhat.com/archives/linux-lvm/2016-May/msg00092.html
(continued at https://www.redhat.com/archives/linux-lvm/2016-June/msg00000.html).
In that thread, Zdenek Kabelac fixed the problem manually, but there
was no information about exactly what was fixed or how. I have also
asked about this on the #lvm channel on freenode and on Stack Exchange
(https://superuser.com/questions/1587224/lvm2-thin-pool-pool-target-too-small),
so my apologies to those of you who are seeing this again.

I had a problem with a runit script that caused my dmeventd to be
killed and restarted every 5 seconds. The script has since been fixed,
but my LVM thin pool is still unmountable. The following is an excerpt
from my system logs from when the problem first appeared:

device-mapper: thin: 253:10: reached low water mark for data device:
sending event.
lvm[1221]: WARNING: Sum of all thin volume sizes (2.81 TiB) exceeds
the size of thin pools and the size of whole volume group (1.86 TiB).
lvm[1221]: Size of logical volume
nellodee-nvme/nellodee-nvme-thin_tdata changed from 212.64 GiB (13609
extents) to <233.91 GiB (14970 extents).
device-mapper: thin: 253:10: growing the data device from 13609 to 14970 blocks
lvm[1221]: Logical volume nellodee-nvme/nellodee-nvme-thin_tdata
successfully resized.
lvm[1221]: dmeventd received break, scheduling exit.
lvm[1221]: dmeventd received break, scheduling exit.
lvm[1221]: WARNING: Thin pool
nellodee--nvme-nellodee--nvme--thin-tpool data is now 81.88% full.
<SNIP> (lots of repeats of "lvm[1221]: dmeventd received break,
scheduling exit.")
lvm[1221]: No longer monitoring thin pool
nellodee--nvme-nellodee--nvme--thin-tpool.
device-mapper: thin: 253:10: pool target (13609 blocks) too small:
expected 14970
device-mapper: table: 253:10: thin-pool: preresume failed, error = -22
lvm[1221]: dmeventd received break, scheduling exit.
(previous message repeats many times)
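
If I'm reading these messages correctly, the block counts in the
kernel messages line up exactly with the extent counts in the lvm
resize message, which works out to a 16 MiB extent size on my volume
group:

  13609 extents * 16 MiB = 217744 MiB = ~212.64 GiB  (size before the grow)
  14970 extents * 16 MiB = 239520 MiB = ~233.91 GiB  (size after the grow)

So my understanding (and it is only a guess) is that the pool's own
metadata now records 14970 data blocks, while the table being loaded
for the pool only presents 13609, and that mismatch is why preresume
fails.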

After those messages, the system became unresponsive, so I power
cycled it. On boot, the following was printed and I was dropped into
an emergency shell:

device-mapper: thin: 253:10: pool target (13609 blocks) too small:
expected 14970
device-mapper: table: 253:10: thin-pool: preresume failed, error = -22
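
In case it is useful, the checks I have in mind from the emergency
shell look like this (device names are from my system; the thin_dump
step assumes the pool metadata device can be read somehow, which I
have not managed in this state):

  # size LVM thinks the thin pool data LV has
  lvs -a --units s -o lv_name,lv_size,seg_size nellodee-nvme

  # what the kernel tables actually present for the pool devices
  dmsetup table | grep nellodee

  # the data block count the pool's own metadata expects should be in
  # the nr_data_blocks field of thin_dump's superblock line
  thin_dump /dev/mapper/<metadata device> | grep nr_data_blocks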

I have tried thin_repair, which reported success but did not solve
the problem. I tried vgcfgrestore (using metadata backups going back
quite a ways), which also reported success and did not solve the
problem. I tried lvchange --repair. I tried lvextending the thin
volume, which reported "Cannot resize logical volume
nellodee-nvme/nellodee-nvme-thin with active component LV(s)". I tried
lvextending the underlying *_tdata LV, which reported "Can't resize
internal logical volume nellodee-nvme/nellodee-nvme-thin_tdata". I
have the LVM header, if that would be of interest to anyone.
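
Based on the numbers above, my best guess (and it is only a guess; I
have not run this and would very much like confirmation first) is
that the LVM metadata needs to be brought back into agreement with
the 14970-block size the pool metadata expects. Something like:

  # dump the current metadata to a file I can edit
  vgcfgbackup -f /tmp/nellodee-nvme.vg nellodee-nvme

  # hand-edit /tmp/nellodee-nvme.vg so the nellodee-nvme-thin_tdata
  # segment covers 14970 extents instead of 13609 (this is the part
  # I am least sure how to do safely)

  # write the edited metadata back; I believe --force is needed for a
  # VG that contains thin volumes
  vgcfgrestore --force -f /tmp/nellodee-nvme.vg nellodee-nvme

  # and then retry activation
  vgchange -ay nellodee-nvme

Is that anywhere close to the right direction?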

Beyond that guess, I am at a loss about how to proceed. Is there some
flag I've missed, or some tool I don't know about, that could fix
this? Thank you very much for your attention,
--Duncan Townsend

P.S. I cannot restore from backup. Ironically/amusingly, this happened
right in the middle of me bringing my new backup system online.



Thread overview: 14+ messages
2020-09-20 23:48 [linux-lvm] thin: pool target too small Duncan Townsend
2020-09-21  9:23 ` Zdenek Kabelac
2020-09-21 13:47   ` Duncan Townsend
2020-09-22 22:02     ` Zdenek Kabelac
2020-09-23 18:13       ` Duncan Townsend
2020-09-23 18:49         ` Zdenek Kabelac
2020-09-23 19:54           ` Duncan Townsend
2020-09-24 17:54             ` Zdenek Kabelac
2020-09-26 13:30               ` Duncan Townsend
2020-09-29 14:33                 ` Duncan Townsend
2020-09-29 15:53                   ` Zdenek Kabelac
2020-09-30 18:00                     ` Duncan Townsend
2020-10-02 13:05                       ` Duncan Townsend
2020-10-09 21:15                         ` Duncan Townsend
