linux-lvm.redhat.com archive mirror
* [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
@ 2019-12-08 19:35 Łukasz Czerpak
  2019-12-08 20:47 ` Łukasz Czerpak
  0 siblings, 1 reply; 9+ messages in thread
From: Łukasz Czerpak @ 2019-12-08 19:35 UTC (permalink / raw)
  To: linux-lvm

Hi,

I cannot get my LVM working. 

The structure is as follows:

vg1 -> thinpool1 -> 11x lvs

I extended the size of one of the child LVs and then ran xfs_growfs, which got stuck. After an hour I did a cold reboot, and the thin pool reported the following:

$ lvchange -ay vg1
 WARNING: Not using lvmetad because a repair command was run.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
 Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
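
For context, the step that triggered all this was the usual thin-LV grow workflow; the LV name and mount point below are placeholders, not the real ones:

$ lvextend -L +100G vg1/example-lv     # grow one of the thin LVs in thinpool1
$ xfs_growfs /mnt/example              # grow the XFS filesystem - this is the step that hung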


I tried lvconvert --repair vg1/thinpool1, but it always throws a transaction id mismatch error:

$ lvconvert --repair vg1/thinpool1
 WARNING: Disabling lvmetad cache for repair command.
 WARNING: Not using lvmetad because of repair.
 Transaction id 505 from pool "vg1/thinpool1" does not match repaired transaction id 549 from /dev/mapper/vg1-lvol2_pmspare.
 WARNING: LV vg1/thinpool1_meta3 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
 WARNING: New metadata LV vg1/thinpool1_tmeta might use different PVs.  Move it with pvmove if required.
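
The sub-LVs referenced in those warnings can at least be inspected before anything else is attempted; a rough sketch of the kind of commands that apply here (assuming the metadata backup LV activates like a normal LV and that thin-provisioning-tools is installed):

$ lvs -a vg1                                    # shows thinpool1_tmeta, [lvol2_pmspare], thinpool1_meta3, ...
$ lvchange -ay vg1/thinpool1_meta3              # activate the backup of the unrepaired metadata
$ thin_check /dev/mapper/vg1-thinpool1_meta3    # sanity-check that metadata with thin-provisioning-tools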

I have no idea how to proceed and, more importantly, *how to access/recover the data in the LVs*. I'm desperately looking for any help :(

—
Best regards,
Łukasz Czerpak

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  2019-12-08 19:35 [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505 Łukasz Czerpak
@ 2019-12-08 20:47 ` Łukasz Czerpak
  2019-12-09 13:36   ` Zdenek Kabelac
  0 siblings, 1 reply; 9+ messages in thread
From: Łukasz Czerpak @ 2019-12-08 20:47 UTC (permalink / raw)
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 2883 bytes --]

After googling a lot I figured out what to do and it worked - at least I can access the most critical data.
I’ve followed the instructions from this blog post: https://blog.monotok.org/lvm-transaction-id-mismatch-and-metadata-resize-error/

However, I have no idea what the root cause was. I hope I can fully recover the volumes without re-creating the whole VG.
In case I did something terribly wrong that only looks like a solution now, but may cause issues in the future - I would appreciate any hints.
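
In short, what I did by following that post was to bring the transaction_id stored in the LVM metadata back in line with what the kernel reports; roughly (the backup file path is just an example):

$ vgcfgbackup -f /root/vg1-backup.txt vg1
# edit transaction_id in the thinpool1 section of /root/vg1-backup.txt so it matches
# the value reported by the kernel (549 here)
$ vgcfgrestore --force -f /root/vg1-backup.txt vg1
$ lvchange -ay vg1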


—
Best Regards,
Łukasz Czerpak




> On 8 Dec 2019, at 20:35, Łukasz Czerpak <lukasz.czerpak@gmail.com> wrote:
> 
> Hi,
> 
> I cannot get my LVM working. 
> 
> The structure is as follows:
> 
> vg1 -> thinpool1 -> 11x lvs
> 
> I extended the size of one of the child LVs and then ran xfs_growfs, which got stuck. After an hour I did a cold reboot, and the thin pool reported the following:
> 
> $ lvchange -ay vg1
> WARNING: Not using lvmetad because a repair command was run.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> 
> 
> I tried lvconvert --repair vg1/thinpool1, but it always throws a transaction id mismatch error:
> 
> $ lvconvert --repair vg1/thinpool1
> WARNING: Disabling lvmetad cache for repair command.
> WARNING: Not using lvmetad because of repair.
> Transaction id 505 from pool "vg1/thinpool1" does not match repaired transaction id 549 from /dev/mapper/vg1-lvol2_pmspare.
> WARNING: LV vg1/thinpool1_meta3 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
> WARNING: New metadata LV vg1/thinpool1_tmeta might use different PVs.  Move it with pvmove if required.
> 
> I have no idea how to proceed and, more importantly, *how to access/recover the data in the LVs*. I'm desperately looking for any help :(
> 
> —
> Best regards,
> Łukasz Czerpak
> 
> 
> 


[-- Attachment #2: Type: text/html, Size: 3995 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  2019-12-08 20:47 ` Łukasz Czerpak
@ 2019-12-09 13:36   ` Zdenek Kabelac
  2019-12-09 13:50     ` Łukasz Czerpak
  0 siblings, 1 reply; 9+ messages in thread
From: Zdenek Kabelac @ 2019-12-09 13:36 UTC (permalink / raw)
  To: LVM general discussion and development, Łukasz Czerpak

On 08. 12. 19 at 21:47, Łukasz Czerpak wrote:
> After googling a lot I figured out what to do and it worked - at least I can
> access the most critical data.
> I’ve followed instructions from this blog post: 
> https://blog.monotok.org/lvm-transaction-id-mismatch-and-metadata-resize-error/
> 
> However, I have no idea what the root cause was. I hope I can fully
> recover the volumes without re-creating the whole VG.
> In case I did something terribly wrong that only looks like a solution now, but
> may cause issues in the future - I would appreciate any hints.
> 
>>
>> $ lvchange -ay vg1
>> WARNING: Not using lvmetad because a repair command was run.
>> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.

Hi

What are your lvm2 & kernel versions?

This difference is too big for 'recent' versions - there should never be more
than one - unless you are using an old kernel & old lvm2.

Regards

Zdenek

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  2019-12-09 13:36   ` Zdenek Kabelac
@ 2019-12-09 13:50     ` Łukasz Czerpak
  2019-12-09 13:52       ` Łukasz Czerpak
  2019-12-09 13:59       ` Zdenek Kabelac
  0 siblings, 2 replies; 9+ messages in thread
From: Łukasz Czerpak @ 2019-12-09 13:50 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: LVM general discussion and development

hi,

It’s Ubuntu 18.04.3:

$ lvm version
  LVM version:     2.02.176(2) (2017-11-03)
  Library version: 1.02.145 (2017-11-03)
  Driver version:  4.37.0

$ uname -a
Linux gandalf 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:20:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

It’s weird, as the same error occurred a few minutes ago. I wanted to take a snapshot of a thin volume and it first returned the following error:

$ lvcreate -s --name vmail-data-snapshot vg1/vmail-data
Using default stripesize 64.00 KiB.
Can't create snapshot vmail-data-snapshot as origin vmail-data is not suspended.
Failed to suspend thin snapshot origin vg1/vmail-data.

Then I tried with a different volume:

$ lvcreate -s --name owncloud-data-snapshot vg1/owncloud-data
Using default stripesize 64.00 KiB.
Thin pool vg1-thinpool1-tpool (253:2) transaction_id is 574, while expected 572.
Failed to suspend vg1/thinpool1 with queued messages.

The same error occurred when I then tried to export an LXD container:

$ lvcreate -s --name owncloud-data-snapshot vg1/owncloud-data
  Using default stripesize 64.00 KiB.
  Thin pool vg1-thinpool1-tpool (253:2) transaction_id is 574, while expected 572.
  Failed to suspend vg1/thinpool1 with queued messages.

I did vgcfgbackup and the transaction_id for thinpool1 was 573. I really don’t know what’s going on.
I’m wondering if this might be caused by LXD running as a snap, which is known not to interact with the system's lvmetad and thus gives out-of-date information. LXD is configured to use thinpool1 as storage.
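
If stale lvmetad state is part of it, the cache can be refreshed from the devices, or lvmetad can be bypassed for a single command to compare; both are standard lvm2 options, nothing specific to this setup:

$ pvscan --cache                               # repopulate lvmetad from the actual devices
$ vgs --config 'global/use_lvmetad=0' vg1      # read the metadata directly, bypassing lvmetad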

Maybe after I did vgcfgbackup, updated the mismatching transaction_id and restored it with vgcfgrestore, I just regained access to the data and got a false impression that everything was fixed.
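
One way to compare the two sides directly is to read the transaction_id the kernel holds for the pool against the one recorded in the LVM metadata; a rough sketch (device and VG names taken from the messages above, the temp file path is arbitrary):

$ dmsetup status vg1-thinpool1-tpool      # for a thin-pool target, the first field after "thin-pool" is the kernel's transaction_id
$ vgcfgbackup -f /tmp/vg1.txt vg1 && grep transaction_id /tmp/vg1.txt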

—
Best Regards,
Łukasz Czerpak




> On 9 Dec 2019, at 14:36, Zdenek Kabelac <zkabelac@redhat.com> wrote:
> 
> On 08. 12. 19 at 21:47, Łukasz Czerpak wrote:
>> After googling a lot I figured out what to do and it worked - at least I can access the most critical data.
>> I’ve followed the instructions from this blog post: https://blog.monotok.org/lvm-transaction-id-mismatch-and-metadata-resize-error/
>> However, I have no idea what the root cause was. I hope I can fully recover the volumes without re-creating the whole VG.
>> In case I did something terribly wrong that only looks like a solution now, but may cause issues in the future - I would appreciate any hints.
>>> 
>>> $ lvchange -ay vg1
>>> WARNING: Not using lvmetad because a repair command was run.
>>> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
> 
> Hi
> 
> What are your lvm2 & kernel versions?
> 
> This difference is too big for 'recent' versions - there should never be more
> than one - unless you are using an old kernel & old lvm2.
> 
> Regards
> 
> Zdenek
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  2019-12-09 13:50     ` Łukasz Czerpak
@ 2019-12-09 13:52       ` Łukasz Czerpak
  2019-12-09 13:59       ` Zdenek Kabelac
  1 sibling, 0 replies; 9+ messages in thread
From: Łukasz Czerpak @ 2019-12-09 13:52 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 3714 bytes --]

Forgot to paste the ticket about LXD giving out-of-date information: https://github.com/lxc/lxd/issues/4445

--
best regards,
Łukasz Czerpak




> On 9 Dec 2019, at 14:50, Łukasz Czerpak <lukasz.czerpak@gmail.com> wrote:
> 
> hi,
> 
> It’s Ubuntu 18.04.3:
> 
> $ lvm version
>  LVM version:     2.02.176(2) (2017-11-03)
>  Library version: 1.02.145 (2017-11-03)
>  Driver version:  4.37.0
> 
> $ uname -a
> Linux gandalf 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:20:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
> 
> It’s weird, as the same error occurred a few minutes ago. I wanted to take a snapshot of a thin volume and it first returned the following error:
> 
> $ lvcreate -s --name vmail-data-snapshot vg1/vmail-data                                                                                                                                       
> Using default stripesize 64.00 KiB.                                                                                                                                                         
> Can't create snapshot vmail-data-snapshot as origin vmail-data is not suspended.                                                                                                            
> Failed to suspend thin snapshot origin vg1/vmail-data.
> 
> Then I tried with a different volume:
> 
> $ lvcreate -s --name owncloud-data-snapshot vg1/owncloud-data                                                                                                                                 
> Using default stripesize 64.00 KiB.
> Thin pool vg1-thinpool1-tpool (253:2) transaction_id is 574, while expected 572.
> Failed to suspend vg1/thinpool1 with queued messages.
> 
> The same error occurred when I then tried to export an LXD container:
> 
> $ lvcreate -s --name owncloud-data-snapshot vg1/owncloud-data
>  Using default stripesize 64.00 KiB.
>  Thin pool vg1-thinpool1-tpool (253:2) transaction_id is 574, while expected 572.
>  Failed to suspend vg1/thinpool1 with queued messages.
> 
> I did vgcfgbackup and the transaction_id for thinpool1 was 573. I really don’t know what’s going on.
> I’m wondering if this might be caused by LXD running as a snap, which is known not to interact with the system's lvmetad and thus gives out-of-date information. LXD is configured to use thinpool1 as storage.
> 
> Maybe after I did vgcfgbackup, updated the mismatching transaction_id and restored it with vgcfgrestore, I just regained access to the data and got a false impression that everything was fixed.
> 
> —
> Best Regards,
> Łukasz Czerpak
> 
> 
> 
> 
>> On 9 Dec 2019, at 14:36, Zdenek Kabelac <zkabelac@redhat.com> wrote:
>> 
>> On 08. 12. 19 at 21:47, Łukasz Czerpak wrote:
>>> After googling a lot I figured out what to do and it worked - at least I can access the most critical data.
>>> I’ve followed the instructions from this blog post: https://blog.monotok.org/lvm-transaction-id-mismatch-and-metadata-resize-error/
>>> However, I have no idea what the root cause was. I hope I can fully recover the volumes without re-creating the whole VG.
>>> In case I did something terribly wrong that only looks like a solution now, but may cause issues in the future - I would appreciate any hints.
>>>> 
>>>> $ lvchange -ay vg1
>>>> WARNING: Not using lvmetad because a repair command was run.
>>>> Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
>> 
>> Hi
>> 
>> What are your lvm2 & kernel versions?
>> 
>> This difference is too big for 'recent' versions - there should never be more
>> than one - unless you are using an old kernel & old lvm2.
>> 
>> Regards
>> 
>> Zdenek
>> 
> 


[-- Attachment #2: Type: text/html, Size: 7851 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  2019-12-09 13:50     ` Łukasz Czerpak
  2019-12-09 13:52       ` Łukasz Czerpak
@ 2019-12-09 13:59       ` Zdenek Kabelac
  2019-12-09 14:18         ` Łukasz Czerpak
  1 sibling, 1 reply; 9+ messages in thread
From: Zdenek Kabelac @ 2019-12-09 13:59 UTC (permalink / raw)
  To: Łukasz Czerpak; +Cc: LVM general discussion and development

On 09. 12. 19 at 14:50, Łukasz Czerpak wrote:
> hi,
> 
> It’s Ubuntu 18.04.3:
> 
> $ lvm version
>    LVM version:     2.02.176(2) (2017-11-03)
>    Library version: 1.02.145 (2017-11-03)
>    Driver version:  4.37.0
> 
> $ uname -a
> Linux gandalf 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:20:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
> 
> It’s weird as same error occurred few minutes ago. I wanted to take snapshot of thin volume and it first returned the following error:
> 
> $ lvcreate -s --name vmail-data-snapshot vg1/vmail-data
> Using default stripesize 64.00 KiB.
> Can't create snapshot vmail-data-snapshot as origin vmail-data is not suspended.
> Failed to suspend thin snapshot origin vg1/vmail-data.
> 
> Then I tried with different volume:
> 
> $ lvcreate -s --name owncloud-data-snapshot vg1/owncloud-data
> Using default stripesize 64.00 KiB.
> Thin pool vg1-thinpool1-tpool (253:2) transaction_id is 574, while expected 572.
> Failed to suspend vg1/thinpool1 with queued messages.
> 
> Same error when then tried to export LXD’s container:
> 

Hi

While I'd highly recommend moving to kernel 4.20 (at least) - from the names of
your volumes - it does look like you are using thinp in some 'cloud' environment.

For a thin-pool it's critically important to always have the pool active only
on a single machine.  You must never have a thin-pool activated on multiple
machines (even if one machine is not using it - but just has it active).

We have already seen many times that users activated a thin-pool on their host
machines, then passed the devices into virtual machines and used the same
thin-pool there (so the thin-pool was active multiple times at the same time).

So please carefully check whether this is your case - as this would nicely
explain why your 'transaction_id' diverged so much.
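
A rough way to check that, on the host and inside anything else that can see the PVs (plain lvm2/dmsetup commands, nothing specific to this setup):

$ lvs -a -o lv_name,lv_active vg1        # what lvm2 considers active on this node
$ dmsetup info -c | grep vg1-thinpool1   # whether device-mapper actually has the pool tables loaded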

Regards

Zdenek

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  2019-12-09 13:59       ` Zdenek Kabelac
@ 2019-12-09 14:18         ` Łukasz Czerpak
  2019-12-09 14:25           ` Zdenek Kabelac
  0 siblings, 1 reply; 9+ messages in thread
From: Łukasz Czerpak @ 2019-12-09 14:18 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 2635 bytes --]

hi,

Sure, I will update the kernel as per your recommendation. Thank you for the help and the prompt replies!
Regarding “sharing the thin-pool” - there are no VMs; only LXD is using the VG and thin-pool. After digging more I found a relevant article:

https://discuss.linuxcontainers.org/t/is-it-safe-to-create-an-lvm-backed-storage-pool-that-can-be-shared-with-other-logical-volumes/5658/5

This might be the reason. I will investigate it further and share the results here.
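
For completeness, the LXD side can be checked to confirm which VG and thin pool it manages; a sketch (the storage pool name "default" is only a guess):

$ lxc storage list
$ lxc storage show default    # for the lvm driver this should list keys like lvm.vg_name / lvm.thinpool_name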

--
best regards,
Łukasz Czerpak




> On 9 Dec 2019, at 14:59, Zdenek Kabelac <zkabelac@redhat.com> wrote:
> 
> On 09. 12. 19 at 14:50, Łukasz Czerpak wrote:
>> hi,
>> It’s Ubuntu 18.04.3:
>> $ lvm version
>>   LVM version:     2.02.176(2) (2017-11-03)
>>   Library version: 1.02.145 (2017-11-03)
>>   Driver version:  4.37.0
>> $ uname -a
>> Linux gandalf 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:20:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
>> It’s weird, as the same error occurred a few minutes ago. I wanted to take a snapshot of a thin volume and it first returned the following error:
>> $ lvcreate -s --name vmail-data-snapshot vg1/vmail-data
>> Using default stripesize 64.00 KiB.
>> Can't create snapshot vmail-data-snapshot as origin vmail-data is not suspended.
>> Failed to suspend thin snapshot origin vg1/vmail-data.
>> Then I tried with a different volume:
>> $ lvcreate -s --name owncloud-data-snapshot vg1/owncloud-data
>> Using default stripesize 64.00 KiB.
>> Thin pool vg1-thinpool1-tpool (253:2) transaction_id is 574, while expected 572.
>> Failed to suspend vg1/thinpool1 with queued messages.
>> The same error occurred when I then tried to export an LXD container:
> 
> Hi
> 
> While I'd highly recommend moving to kernel 4.20 (at least) - from the names of your volumes - it does look like you are using thinp in some 'cloud' environment.
> 
> For a thin-pool it's critically important to always have the pool active only on a single machine.  You must never have a thin-pool activated on multiple machines (even if one machine is not using it - but just has it active).
> 
> We have already seen many times that users activated a thin-pool on their host machines, then passed the devices into virtual machines and used the same thin-pool there (so the thin-pool was active multiple times at the same time).
> 
> So please carefully check whether this is your case - as this would nicely explain why your 'transaction_id' diverged so much.
> 
> Regards
> 
> Zdenek


[-- Attachment #2: Type: text/html, Size: 11674 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  2019-12-09 14:18         ` Łukasz Czerpak
@ 2019-12-09 14:25           ` Zdenek Kabelac
  0 siblings, 0 replies; 9+ messages in thread
From: Zdenek Kabelac @ 2019-12-09 14:25 UTC (permalink / raw)
  To: Łukasz Czerpak; +Cc: LVM general discussion and development

On 09. 12. 19 at 15:18, Łukasz Czerpak wrote:
> hi,
> 
> Sure, I will update the kernel as per your recommendation. Thank you for the help
> and the prompt replies!
> Regarding “sharing the thin-pool” - there are no VMs; only LXD is using
> the VG and thin-pool. After digging more I found a relevant article:
> 
> https://discuss.linuxcontainers.org/t/is-it-safe-to-create-an-lvm-backed-storage-pool-that-can-be-shared-with-other-logical-volumes/5658/5
> 
> This might be the reason. I will investigate it further and share the results here.

Using containers with DM is a seriously non-trivial task (especially if
you are dealing with anything more complex than the 'linear' dm target).

A Linux device is not a containerized resource, so there needs to exist something
like a 'cluster locking' mechanism for manipulating devices and metadata.

If you are on a single host, 'file locking' is used - but if you start
to manipulate lvm2 metadata from multiple containers at the same time -
without a locking mechanism shared between all the commands - it will soon go ballistic
and explode...  (and it's actually weird you managed to get as high as 500
transactions without noticing the problem...)

Zdenek

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
@ 2019-12-08 19:30 Łukasz Czerpak
  0 siblings, 0 replies; 9+ messages in thread
From: Łukasz Czerpak @ 2019-12-08 19:30 UTC (permalink / raw)
  To: linux-lvm

Hi,

I cannot get my LVM working. 

The structure is as follows:

vg1 -> thinpool1 -> 11x lvs

I extended the size of one of the child LVs and then ran xfs_growfs, which got stuck. After an hour I did a cold reboot, and the thin pool reported the following:

$ lvchange -ay vg1
  WARNING: Not using lvmetad because a repair command was run.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.
  Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.


I tried lvconvert --repair vg1/thinpool1, but it always throws a transaction id mismatch error:

$ lvconvert --repair vg1/thinpool1
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  Transaction id 505 from pool "vg1/thinpool1" does not match repaired transaction id 549 from /dev/mapper/vg1-lvol2_pmspare.
  WARNING: LV vg1/thinpool1_meta3 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV vg1/thinpool1_tmeta might use different PVs.  Move it with pvmove if required.

I have no idea how to proceed and, more importantly, *how to access/recover the data in the LVs*. I'm desperately looking for any help :(

—
Best regards,
Łukasz Czerpak

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2019-12-09 14:25 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-08 19:35 [linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505 Łukasz Czerpak
2019-12-08 20:47 ` Łukasz Czerpak
2019-12-09 13:36   ` Zdenek Kabelac
2019-12-09 13:50     ` Łukasz Czerpak
2019-12-09 13:52       ` Łukasz Czerpak
2019-12-09 13:59       ` Zdenek Kabelac
2019-12-09 14:18         ` Łukasz Czerpak
2019-12-09 14:25           ` Zdenek Kabelac
  -- strict thread matches above, loose matches on Subject: below --
2019-12-08 19:30 Łukasz Czerpak
