* [linux-lvm] metadata device too small
@ 2020-01-11 17:57 Ede Wolf
2020-01-11 22:00 ` Ede Wolf
2020-01-12 18:11 ` Zdenek Kabelac
0 siblings, 2 replies; 11+ messages in thread
From: Ede Wolf @ 2020-01-11 17:57 UTC (permalink / raw)
To: LVM general discussion and development
After having swapped a 2.2T thinpool metadata device for a 16GB one,
I've run into a transaction id mismatch. So I ran lvconvert --repair on the
thin pool - in fact, I had to run the repair twice, as the
transaction id error persisted after the first run.
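Roughly, the sequence was as follows (a sketch reconstructed after the fact;
the name of the new 16GB metadata LV is only a placeholder):
[root]# lvchange -an VG_Raid6/ThinPoolRaid6
[root]# lvconvert --thinpool VG_Raid6/ThinPoolRaid6 --poolmetadata VG_Raid6/meta16g
[root]# lvconvert --repair VG_Raid6/ThinPoolRaid6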
Ever since then, I have not been able to activate the thinpool:
[root]# lvchange -ay VG_Raid6/ThinPoolRaid6
WARNING: Not using lvmetad because a repair command was run.
Activation of logical volume VG_Raid6/ThinPoolRaid6 is prohibited
while logical volume VG_Raid6/ThinPoolRaid6_tmeta is active.
So disable them and try again:
[root]# lvchange -an VG_Raid6/ThinPoolRaid6_tdata
WARNING: Not using lvmetad because a repair command was run.
[root]# lvchange -an VG_Raid6/ThinPoolRaid6_tmeta
WARNING: Not using lvmetad because a repair command was run.
[root]# lvchange -ay VG_Raid6/ThinPoolRaid6
WARNING: Not using lvmetad because a repair command was run.
device-mapper: resume ioctl on (253:3) failed: Invalid argument
Unable to resume VG_Raid6-ThinPoolRaid6-tpool (253:3).
And from the journal:
kernel: device-mapper: thin: 253:3: metadata device (4145152 blocks) too
small: expected 4161600
kernel: device-mapper: table: 253:3: thin-pool: preresume failed, error
= -22
Despite not using Ubuntu, I may have been bitten by this bug(?), as my
new metadata partition happens to be 16GB:
"If pool meta is 16GB , lvconvert --repair will destroy logical volumes."
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1625201
Is there any way to make the data accessible again?
lvm2 2.02.186
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [linux-lvm] metadata device too small
2020-01-11 17:57 [linux-lvm] metadata device too small Ede Wolf
@ 2020-01-11 22:00 ` Ede Wolf
2020-01-11 22:07 ` Ede Wolf
2020-01-12 18:11 ` Zdenek Kabelac
1 sibling, 1 reply; 11+ messages in thread
From: Ede Wolf @ 2020-01-11 22:00 UTC (permalink / raw)
To: linux-lvm
So I reverted (swapped back) to the _meta0 backup that had been created by
--repair, which brought me back to the transaction id error. Then I did a
vgcfgbackup, changed the transaction id to what lvm was expecting, restored
it, and, woohoo, the thinpool can be activated again.
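For reference, the edit went roughly like this (a sketch; the backup file
name is only a placeholder, and --force is needed because the VG contains
thin volumes):
[root]# vgcfgbackup -f /tmp/VG_Raid6.vg VG_Raid6
(edit /tmp/VG_Raid6.vg and set transaction_id in the ThinPoolRaid6 segment
to the value lvm reported it was expecting)
[root]# vgcfgrestore -f /tmp/VG_Raid6.vg --force VG_Raid6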
However, when trying to activate an actual volume within that thinpool:
# lvchange -ay VG_Raid6/data
device-mapper: reload ioctl on (253:8) failed: No data available
And that message holds true for all LVs of that thinpool.
On 11.01.20 at 18:57, Ede Wolf wrote:
> After having swapped a 2.2T thinpool metadata device for a 16GB one,
> I've run into a transaction id mismatch. So I ran lvconvert --repair on the
> thin pool - in fact, I had to run the repair twice, as the
> transaction id error persisted after the first run.
>
> Ever since then, I have not been able to activate the thinpool:
>
> [root]# lvchange -ay VG_Raid6/ThinPoolRaid6
> WARNING: Not using lvmetad because a repair command was run.
> Activation of logical volume VG_Raid6/ThinPoolRaid6 is prohibited
> while logical volume VG_Raid6/ThinPoolRaid6_tmeta is active.
>
> So disable them and try again:
>
> [root]# lvchange -an VG_Raid6/ThinPoolRaid6_tdata
> WARNING: Not using lvmetad because a repair command was run.
>
> [root]# lvchange -an VG_Raid6/ThinPoolRaid6_tmeta
> WARNING: Not using lvmetad because a repair command was run.
>
> [root]# lvchange -ay VG_Raid6/ThinPoolRaid6
> WARNING: Not using lvmetad because a repair command was run.
> device-mapper: resume ioctl on (253:3) failed: Invalid argument
> Unable to resume VG_Raid6-ThinPoolRaid6-tpool (253:3).
>
> And from the journal:
>
> kernel: device-mapper: thin: 253:3: metadata device (4145152 blocks) too
> small: expected 4161600
> kernel: device-mapper: table: 253:3: thin-pool: preresume failed, error
> = -22
>
>
> Despite not using Ubuntu, I may have been bitten by this bug(?), as my
> new metadata partition happens to be 16GB:
>
> "If pool meta is 16GB , lvconvert --repair will destroy logical volumes."
>
> https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1625201
>
> Is there any way to make the data accessible again?
>
> lvm2 2.02.186
>
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [linux-lvm] metadata device too small
2020-01-11 22:00 ` Ede Wolf
@ 2020-01-11 22:07 ` Ede Wolf
2020-01-13 15:02 ` Marian Csontos
0 siblings, 1 reply; 11+ messages in thread
From: Ede Wolf @ 2020-01-11 22:07 UTC (permalink / raw)
To: linux-lvm
Forgot to add the journal output, though I do not think it raises the chances much:
kernel: device-mapper: table: 253:8: thin: Couldn't open thin internal
device
kernel: device-mapper: ioctl: error adding target to table
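One idea I have not tried yet (only a diagnostic sketch, following the usual
metadata swap-out procedure from lvmthin(7); the LV name and size here are
placeholders): swap the pool metadata out into a spare LV, dump it with
thin_dump to see whether the thin device entries are still present, then swap
it back:
[root]# lvcreate -L16G -n metacheck VG_Raid6
[root]# lvconvert --thinpool VG_Raid6/ThinPoolRaid6 --poolmetadata VG_Raid6/metacheck
[root]# lvchange -ay VG_Raid6/metacheck
[root]# thin_dump /dev/VG_Raid6/metacheck | grep '<device'
[root]# lvchange -an VG_Raid6/metacheck
[root]# lvconvert --thinpool VG_Raid6/ThinPoolRaid6 --poolmetadata VG_Raid6/metacheck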
On 11.01.20 at 23:00, Ede Wolf wrote:
> So I reverted (swapped back) to the _meta0 backup that had been created by
> --repair, which brought me back to the transaction id error. Then I did a
> vgcfgbackup, changed the transaction id to what lvm was expecting, restored
> it, and, woohoo, the thinpool can be activated again.
>
> However, when trying to activate an actual volume within that thinpool:
>
> # lvchange -ay VG_Raid6/data
> device-mapper: reload ioctl on (253:8) failed: No data available
>
> And that message holds true for all LVs of that thinpool.
>
>
On 11.01.20 at 18:57, Ede Wolf wrote:
>> After having swapped a 2.2T thinpool metadata device for a 16GB one,
>> I've run into a transaction id mismatch. So I ran lvconvert --repair on
>> the thin pool - in fact, I had to run the repair twice, as the
>> transaction id error persisted after the first run.
>>
>> Ever since then, I have not been able to activate the thinpool:
>>
>> [root]# lvchange -ay VG_Raid6/ThinPoolRaid6
>> WARNING: Not using lvmetad because a repair command was run.
>> Activation of logical volume VG_Raid6/ThinPoolRaid6 is prohibited
>> while logical volume VG_Raid6/ThinPoolRaid6_tmeta is active.
>>
>> So disable them and try again:
>>
>> [root]# lvchange -an VG_Raid6/ThinPoolRaid6_tdata
>> WARNING: Not using lvmetad because a repair command was run.
>>
>> [root]# lvchange -an VG_Raid6/ThinPoolRaid6_tmeta
>> WARNING: Not using lvmetad because a repair command was run.
>>
>> [root]# lvchange -ay VG_Raid6/ThinPoolRaid6
>> WARNING: Not using lvmetad because a repair command was run.
>> device-mapper: resume ioctl on (253:3) failed: Invalid argument
>> Unable to resume VG_Raid6-ThinPoolRaid6-tpool (253:3).
>>
>> And from the journal:
>>
>> kernel: device-mapper: thin: 253:3: metadata device (4145152 blocks)
>> too small: expected 4161600
>> kernel: device-mapper: table: 253:3: thin-pool: preresume failed,
>> error = -22
>>
>>
>> Despite not using Ubuntu, I may have been bitten by this bug(?), as my
>> new metadata partition happens to be 16GB:
>>
>> "If pool meta is 16GB , lvconvert --repair will destroy logical volumes."
>>
>> https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1625201
>>
>> Is there any way to make the data accessible again?
>>
>> lvm2 2.02.186
>>
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [linux-lvm] metadata device too small
2020-01-11 17:57 [linux-lvm] metadata device too small Ede Wolf
2020-01-11 22:00 ` Ede Wolf
@ 2020-01-12 18:11 ` Zdenek Kabelac
2020-01-13 14:32 ` Gionatan Danti
1 sibling, 1 reply; 11+ messages in thread
From: Zdenek Kabelac @ 2020-01-12 18:11 UTC (permalink / raw)
To: listac, LVM general discussion and development
On 11. 01. 20 at 18:57, Ede Wolf wrote:
> After having swapped a 2.2T thinpool metadata device for a 16GB one, I've run
Hi
There was a good reason I specified in my email to use the value 15G.
With 16G there is a 'problem' (a known, not yet resolved issue) with the
different maximum sizes used by the thin_repair (15.875G) and lvm2 (15.8125G)
tools.
If you want to go with the current maximum size supported by lvm2, use the
value -L16192M.
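For example (just a sketch, the LV name is a placeholder), a metadata LV
meant to be swapped into a pool for repair would then be created as:
lvcreate -L16192M -n repairmeta VG_Raid6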
Regards
Zdenek
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [linux-lvm] metadata device too small
2020-01-12 18:11 ` Zdenek Kabelac
@ 2020-01-13 14:32 ` Gionatan Danti
2020-01-13 14:49 ` Zdenek Kabelac
0 siblings, 1 reply; 11+ messages in thread
From: Gionatan Danti @ 2020-01-13 14:32 UTC (permalink / raw)
To: LVM general discussion and development, Zdenek Kabelac, listac
On 12/01/20 19:11, Zdenek Kabelac wrote:
> With 16G there is a 'problem' (a known, not yet resolved issue) with the
> different maximum sizes used by the thin_repair (15.875G) and lvm2
> (15.8125G) tools.
>
> If you want to go with the current maximum size supported by lvm2, use the
> value -L16192M.
Hi Zdenek,
just for confirmation: so using a 16 GiB thin metadata volume *will*
result in activation problems? For example, will a
lvcreate --thin system --name thinpool -L 100G --poolmetadatasize 16G
be affected by the problem you described above?
Finally, does it mean that the lvmthin man page is wrong when stating that
"Thin pool metadata LV sizes can be from 2MiB to 16GiB" (note the GiB
suffix rather than GB)?
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [linux-lvm] metadata device too small
2020-01-13 14:32 ` Gionatan Danti
@ 2020-01-13 14:49 ` Zdenek Kabelac
2020-01-13 15:25 ` Gionatan Danti
0 siblings, 1 reply; 11+ messages in thread
From: Zdenek Kabelac @ 2020-01-13 14:49 UTC (permalink / raw)
To: Gionatan Danti, LVM general discussion and development, listac
On 13. 01. 20 at 15:32, Gionatan Danti wrote:
> On 12/01/20 19:11, Zdenek Kabelac wrote:
>> With 16G there is a 'problem' (a known, not yet resolved issue) with the
>> different maximum sizes used by the thin_repair (15.875G) and lvm2
>> (15.8125G) tools.
>>
>> If you want to go with the current maximum size supported by lvm2, use the
>> value -L16192M.
>
> Hi Zdenek,
> just for confirmation: so using a 16 GiB thin metadata volume *will* result in
> activation problems? For example, will a
>
> lvcreate --thin system --name thinpool -L 100G --poolmetadatasize 16G
>
> be affected by the problem you described above?
>
> Finally, does it mean that the lvmthin man page is wrong when stating that
> "Thin pool metadata LV sizes can be from 2MiB to 16GiB" (note the GiB suffix
> rather than GB)?
>
Hi
Well, the size is 'almost' 16GiB. As long as the size of the thin-pool
metadata is always maintained by lvm2, it's OK - the size is internally
'clamped' correctly. The problem arises when you use this size 'externally':
you make a 16GiB regular LV to be used for thin_repair and then swap such an
LV into the thin-pool.
So to make it clear: when you 'lvcreate' a thin-pool with 16GiB of metadata,
it will work - but when you then try to fix such a thin-pool, it will fail.
So it's always better to create the thin-pool with -L15.812G than with 16G.
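For example (only a sketch with placeholder VG/LV names), creating the pool
just below the limit:
lvcreate --type thin-pool -L 100G --poolmetadatasize 15.81G -n thinpool vg0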
Regards
Zdenek
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [linux-lvm] metadata device too small
2020-01-13 14:49 ` Zdenek Kabelac
@ 2020-01-13 15:25 ` Gionatan Danti
0 siblings, 0 replies; 11+ messages in thread
From: Gionatan Danti @ 2020-01-13 15:25 UTC (permalink / raw)
To: Zdenek Kabelac, LVM general discussion and development, listac
On 13/01/20 15:49, Zdenek Kabelac wrote:
> Hi
>
> Well, the size is 'almost' 16GiB. As long as the size of the thin-pool
> metadata is always maintained by lvm2, it's OK - the size is internally
> 'clamped' correctly. The problem arises when you use this size 'externally':
> you make a 16GiB regular LV to be used for thin_repair and then swap such an
> LV into the thin-pool.
>
> So to make it clear: when you 'lvcreate' a thin-pool with 16GiB of metadata,
> it will work - but when you then try to fix such a thin-pool, it will fail.
> So it's always better to create the thin-pool with -L15.812G than with 16G.
Understood, thank you so much.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8
^ permalink raw reply [flat|nested] 11+ messages in thread
end of thread, other threads:[~2020-01-13 21:29 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-11 17:57 [linux-lvm] metadata device too small Ede Wolf
2020-01-11 22:00 ` Ede Wolf
2020-01-11 22:07 ` Ede Wolf
2020-01-13 15:02 ` Marian Csontos
2020-01-13 16:35 ` Ede Wolf
2020-01-13 19:11 ` Zdenek Kabelac
[not found] ` <74436e16-d2f6-71a0-c264-71ce417de08c@nebelschwaden.de>
2020-01-13 21:29 ` Ede Wolf
2020-01-12 18:11 ` Zdenek Kabelac
2020-01-13 14:32 ` Gionatan Danti
2020-01-13 14:49 ` Zdenek Kabelac
2020-01-13 15:25 ` Gionatan Danti