linux-lvm.redhat.com archive mirror
* [linux-lvm] metadata device too small
@ 2020-01-11 17:57 Ede Wolf
  2020-01-11 22:00 ` Ede Wolf
  2020-01-12 18:11 ` Zdenek Kabelac
  0 siblings, 2 replies; 11+ messages in thread
From: Ede Wolf @ 2020-01-11 17:57 UTC (permalink / raw)
  To: LVM general discussion and development

After having swapped a 2.2T thinpool metadata device for a 16GB one, 
I ran into a transaction id mismatch. So I ran lvconvert --repair on the 
thin volume - in fact, I had to run the repair twice, as the 
transaction id error persisted after the first run.

Now ever since I cannot activate the thinpool any more:

[root]# lvchange -ay VG_Raid6/ThinPoolRaid6
   WARNING: Not using lvmetad because a repair command was run.
   Activation of logical volume VG_Raid6/ThinPoolRaid6 is prohibited 
while logical volume VG_Raid6/ThinPoolRaid6_tmeta is active.

So disable them and try again:

[root]# lvchange -an VG_Raid6/ThinPoolRaid6_tdata
   WARNING: Not using lvmetad because a repair command was run.

[root]# lvchange -an VG_Raid6/ThinPoolRaid6_tmeta
   WARNING: Not using lvmetad because a repair command was run.

[root]# lvchange -ay VG_Raid6/ThinPoolRaid6
   WARNING: Not using lvmetad because a repair command was run.
   device-mapper: resume ioctl on  (253:3) failed: Invalid argument
   Unable to resume VG_Raid6-ThinPoolRaid6-tpool (253:3).

And from the journal:

kernel: device-mapper: thin: 253:3: metadata device (4145152 blocks) too 
small: expected 4161600
kernel: device-mapper: table: 253:3: thin-pool: preresume failed, error 
= -22


Despite not using Ubuntu, I may have been bitten by this bug(?), as my 
new metadata partition happens to be 16GB:

"If pool meta is 16GB , lvconvert --repair will destroy logical volumes."

https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1625201

Is there any way to make the data accessible again?

lvm2 2.02.186

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [linux-lvm] metadata device too small
  2020-01-11 17:57 [linux-lvm] metadata device too small Ede Wolf
@ 2020-01-11 22:00 ` Ede Wolf
  2020-01-11 22:07   ` Ede Wolf
  2020-01-12 18:11 ` Zdenek Kabelac
  1 sibling, 1 reply; 11+ messages in thread
From: Ede Wolf @ 2020-01-11 22:00 UTC (permalink / raw)
  To: linux-lvm

So I reverted (swapped) to the _meta0 backup that had been created by 
--repair, which brought me back to the transaction error. Then I did a 
vgcfgbackup, changed the transaction id to what lvm was expecting, and 
restored it, and, woohoo, the thinpool can be activated again.
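The transaction_id edit described above can be sketched like this. The config snippet below only imitates the relevant fragment of a `vgcfgbackup -f` dump; the pool name comes from this thread, but the id values and the file layout are placeholders of mine, not the real backup:

```shell
# Hypothetical sketch of the transaction_id fix - placeholder values only.
# Imitate the relevant part of what `vgcfgbackup -f` would write:
cat > /tmp/VG_Raid6.cfg <<'EOF'
ThinPoolRaid6 {
        segment1 {
                type = "thin-pool"
                transaction_id = 37
        }
}
EOF

# Change the stored id to the value lvm said it was expecting (38 here):
sed -i 's/transaction_id = 37/transaction_id = 38/' /tmp/VG_Raid6.cfg
grep transaction_id /tmp/VG_Raid6.cfg

# On the real system the edited backup would then be restored with
# something like:  vgcfgrestore -f /tmp/VG_Raid6.cfg --force VG_Raid6
```

Note that vgcfgrestore refuses to touch VGs containing thin pools without --force, which is why the repair path goes through an edited backup file at all.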

However, when trying to activate an actual volume within that thinpool:

# lvchange -ay VG_Raid6/data
   device-mapper: reload ioctl on  (253:8) failed: No data available

And that message holds true for all LVs in that thinpool.




* Re: [linux-lvm] metadata device too small
  2020-01-11 22:00 ` Ede Wolf
@ 2020-01-11 22:07   ` Ede Wolf
  2020-01-13 15:02     ` Marian Csontos
  0 siblings, 1 reply; 11+ messages in thread
From: Ede Wolf @ 2020-01-11 22:07 UTC (permalink / raw)
  To: linux-lvm

Forgot to add the journal output, though I do not think it improves my chances:

kernel: device-mapper: table: 253:8: thin: Couldn't open thin internal 
device
kernel: device-mapper: ioctl: error adding target to table





* Re: [linux-lvm] metadata device too small
  2020-01-11 17:57 [linux-lvm] metadata device too small Ede Wolf
  2020-01-11 22:00 ` Ede Wolf
@ 2020-01-12 18:11 ` Zdenek Kabelac
  2020-01-13 14:32   ` Gionatan Danti
  1 sibling, 1 reply; 11+ messages in thread
From: Zdenek Kabelac @ 2020-01-12 18:11 UTC (permalink / raw)
  To: listac, LVM general discussion and development

On 11. 01. 20 at 18:57, Ede Wolf wrote:
> After having swapped a 2,2T thinpool metadata device for a 16GB one, I've run

Hi

There was a good reason I specified the value 15G in my earlier email.

With 16G there is a 'problem' (a known issue, not yet resolved) with the 
different maximum sizes used by the thin_repair (15.875G) and lvm2 
(15.8125G) tools.

If you want to go with the current maximum size supported by lvm2, use 
the value -L16192M.
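As a sanity check of those two maxima against the kernel errors earlier in the thread (my own arithmetic, assuming the 4 KiB metadata block size dm-thin reports in):

```shell
# lvm2's cap of 16192 MiB, expressed in 4 KiB metadata blocks
# (256 blocks per MiB), matches the device size in the kernel error:
lvm_blocks=$(( 16192 * 256 ))
echo "lvm2-sized device: ${lvm_blocks} blocks"   # 4145152, as in the journal

# The 4161600 blocks the kernel "expected" work out to roughly the
# larger thin_repair cap of ~15.875 GiB:
expected_mib=$(( 4161600 / 256 ))
echo "expected size: ~${expected_mib} MiB"       # ~16256 MiB, i.e. ~15.875 GiB
```

So the repaired metadata was written for the larger thin_repair maximum, and the lvm2-sized device then looks "too small" at activation time.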


Regards

Zdenek


* Re: [linux-lvm] metadata device too small
  2020-01-12 18:11 ` Zdenek Kabelac
@ 2020-01-13 14:32   ` Gionatan Danti
  2020-01-13 14:49     ` Zdenek Kabelac
  0 siblings, 1 reply; 11+ messages in thread
From: Gionatan Danti @ 2020-01-13 14:32 UTC (permalink / raw)
  To: LVM general discussion and development, Zdenek Kabelac, listac


Hi Zdenek,
just for confirmation: so using a 16 GiB thin metadata volume *will* 
result in activation problems? For example, a

lvcreate --thin system --name thinpool -L 100G --poolmetadatasize 16G

will be affected by the problem you wrote above?

Finally, does it mean that the lvmthin man page is wrong when stating that 
"Thin pool metadata LV sizes can be from 2MiB to 16GiB" (note the GiB 
suffix rather than GB)?

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8


* Re: [linux-lvm] metadata device too small
  2020-01-13 14:32   ` Gionatan Danti
@ 2020-01-13 14:49     ` Zdenek Kabelac
  2020-01-13 15:25       ` Gionatan Danti
  0 siblings, 1 reply; 11+ messages in thread
From: Zdenek Kabelac @ 2020-01-13 14:49 UTC (permalink / raw)
  To: Gionatan Danti, LVM general discussion and development, listac


Hi

Well, the size is 'almost' 16GiB - and as long as the size of the 
thin-pool's metadata is maintained by lvm2, it's OK - the size is 
internally 'clamped' correctly. The problem is when you use this size 
'externally' - i.e. you make a 16GiB regular LV, use it for thin_repair, 
and then swap such an LV into the thin-pool.

So to make it clear - when you 'lvcreate' a thin-pool with 16GiB of 
metadata, it will work - but when you later try to fix such a thin-pool, 
it will fail.  So it's always better to create the thin-pool with 
-L15.812G than with 16G.

Regards

Zdenek


* Re: [linux-lvm] metadata device too small
  2020-01-11 22:07   ` Ede Wolf
@ 2020-01-13 15:02     ` Marian Csontos
  2020-01-13 16:35       ` Ede Wolf
  0 siblings, 1 reply; 11+ messages in thread
From: Marian Csontos @ 2020-01-13 15:02 UTC (permalink / raw)
  To: listac, LVM general discussion and development

On 1/11/20 11:07 PM, Ede Wolf wrote:

>>> Is there any way to make the data accessible again?
>>>
>>> lvm2 2.02.186

The lvm2 version is not that important in this case; you will want to try 
a newer thin-provisioning-tools package.

I see the newest version in Ubuntu is 0.7.6 while upstream is at 0.8.5 
with some bugs in thin metadata repair fixed.

You can compile it yourself, or try using e.g. a live Fedora 31 with an 
up-to-date package.

-- Marian




* Re: [linux-lvm] metadata device too small
  2020-01-13 14:49     ` Zdenek Kabelac
@ 2020-01-13 15:25       ` Gionatan Danti
  0 siblings, 0 replies; 11+ messages in thread
From: Gionatan Danti @ 2020-01-13 15:25 UTC (permalink / raw)
  To: Zdenek Kabelac, LVM general discussion and development, listac


Understood, thank you so much.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8


* Re: [linux-lvm] metadata device too small
  2020-01-13 15:02     ` Marian Csontos
@ 2020-01-13 16:35       ` Ede Wolf
  2020-01-13 19:11         ` Zdenek Kabelac
  0 siblings, 1 reply; 11+ messages in thread
From: Ede Wolf @ 2020-01-13 16:35 UTC (permalink / raw)
  To: linux-lvm

Hello Marian,

thanks for the advice, but my thin-provisioning-tools are already at 
version 0.8.5.

Basically I consider the data now lost.



* Re: [linux-lvm] metadata device too small
  2020-01-13 16:35       ` Ede Wolf
@ 2020-01-13 19:11         ` Zdenek Kabelac
       [not found]           ` <74436e16-d2f6-71a0-c264-71ce417de08c@nebelschwaden.de>
  0 siblings, 1 reply; 11+ messages in thread
From: Zdenek Kabelac @ 2020-01-13 19:11 UTC (permalink / raw)
  To: listac, LVM general discussion and development

On 13. 01. 20 at 17:35, Ede Wolf wrote:
> Hello Marian,
> 
> thanks for the advice, but my thin-provisioning-tools are already at 
> version 0.8.5.
> 
> Basically I consider the data now lost.
> 

Hi

Why do you think they are lost ?

Don't you have any 'valid' LV with metadata ?

You can use your 'too big' metadata LV and 'thin_repair' it to an 
appropriately sized one - it should work easily.
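Assuming the oversized metadata LV still existed, that procedure might look roughly like this. The LV names `oldmeta` and `repairmeta` are placeholders of mine, none of this was confirmed in the thread, and the commands need root and the actual VG, so treat it as a sketch, not a recipe:

```shell
# Hypothetical sketch only - placeholder LV names, destructive commands.
# 1. Create a replacement metadata LV at lvm2's maximum supported size:
lvcreate -n repairmeta -L16192M VG_Raid6

# 2. Run thin_repair from the too-big metadata LV into the new one:
thin_repair -i /dev/VG_Raid6/oldmeta -o /dev/VG_Raid6/repairmeta

# 3. Swap the repaired LV in as the pool's metadata:
lvconvert --thinpool VG_Raid6/ThinPoolRaid6 --poolmetadata VG_Raid6/repairmeta
```

After the swap, the old LV holds the previous (broken) metadata, so it can be kept around until the pool's LVs have actually been mounted and verified.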

Regards

Zdenek


* Re: [linux-lvm] metadata device too small
       [not found]           ` <74436e16-d2f6-71a0-c264-71ce417de08c@nebelschwaden.de>
@ 2020-01-13 21:29             ` Ede Wolf
  0 siblings, 0 replies; 11+ messages in thread
From: Ede Wolf @ 2020-01-13 21:29 UTC (permalink / raw)
  To: linux-lvm

Sorry, did by mistake answer privately, not to the list.

On 13.01.20 at 22:27, Ede Wolf wrote:
> Hello,
> 
> unfortunately, as everything looked all right after the swap, I deleted 
> the too-big metadata pool.
> I should have tried to actually mount the LVs, but as they were online 
> and no errors were reported, I proceeded.
> 
> It was only while trying to resize (which is where the big pool had to 
> go, as otherwise I would not have had the space to resize) that I ran 
> across the transaction ID error. Even that did not look fatal, until I 
> noticed that nothing had happened.
> 
> So now I have one pool with the transaction ID error, which after manual 
> fixing claims no data available, and two "repaired" versions of that 
> metadata pool.
> 
> Ede


end of thread, other threads:[~2020-01-13 21:29 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-11 17:57 [linux-lvm] metadata device too small Ede Wolf
2020-01-11 22:00 ` Ede Wolf
2020-01-11 22:07   ` Ede Wolf
2020-01-13 15:02     ` Marian Csontos
2020-01-13 16:35       ` Ede Wolf
2020-01-13 19:11         ` Zdenek Kabelac
     [not found]           ` <74436e16-d2f6-71a0-c264-71ce417de08c@nebelschwaden.de>
2020-01-13 21:29             ` Ede Wolf
2020-01-12 18:11 ` Zdenek Kabelac
2020-01-13 14:32   ` Gionatan Danti
2020-01-13 14:49     ` Zdenek Kabelac
2020-01-13 15:25       ` Gionatan Danti
