lvm-devel.lists.linux.dev archive mirror
* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
@ 2023-04-25 13:49 haaber
  2023-04-26 11:10 ` Zdenek Kabelac
  2023-04-26 12:06 ` Ming Hung Tsai
  0 siblings, 2 replies; 21+ messages in thread
From: haaber @ 2023-04-25 13:49 UTC (permalink / raw)
  To: lvm-devel

Dear all,

I had a lethally bad hardware failure and had to replace the machine. Now I am trying to get back some data that is not in my half-year-old backups ... (I know! but it's too late to be sorry). OK, the old SSD is attached via a USB adapter to a brand-new machine. I started

sudo pvscan
sudo vgscan --mknodes
sudo vgchange -ay

Here is the  unexpected output:

  PV /dev/mapper/OLDSSD   VG   vg0       lvm2 [238.27 GiB / <15.79 GiB free]
   Total: 1 [238.27 GiB] / in use: 1 [238.27 GiB] / in no VG: 0 [0   ]
 ? Found volume group "vg0" using metadata type lvm2
 ? Check of pool vg0/pool00 failed (status:1). Manual repair required!
 ? 1 logical volume(s) in volume group "vg0" now active

Then I consulted Dr. Google for a diagnosis, but found little help. This one

https://mellowhost.com/billing/index.php?rp=/knowledgebase/65/How-to-Repair-a-lvm-thin-pool.html

suggested deactivating all sub-volumes so that a repair can work correctly. As it happened, only swap was
active, so I deactivated it. But the repair still does not work:

lvconvert --repair vg0/pool00
terminate called after throwing an instance of 'std::runtime_error'
  what():  transaction_manager::new_block() couldn't allocate new block
 ? Child 21255 exited abnormally
 ? Repair of thin metadata volume of thin pool vg0/pool00 failed
(status:-1). Manual repair required!


I would like to find a good soul out there who can give more hints. In particular,
could it be a metadata overflow? How do I check? I am not seeking a repair, just "once only"
read access to the pool data ....

thank you so much!   Bernhard



* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-04-25 13:49 Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure haaber
@ 2023-04-26 11:10 ` Zdenek Kabelac
  2023-04-26 13:12   ` haaber
  2023-04-26 12:06 ` Ming Hung Tsai
  1 sibling, 1 reply; 21+ messages in thread
From: Zdenek Kabelac @ 2023-04-26 11:10 UTC (permalink / raw)
  To: lvm-devel

On 25. 04. 23 at 15:49, haaber wrote:
> Dear all,
> 
> I had a lethally bad hardware failure and had to replace the machine. Now I am trying 
> to get back some data that is not in my half-year-old backups ... (I know! 
> but it's too late to be sorry). OK, the old SSD is attached via a USB adapter 
> to a brand-new machine. I started
> 
> sudo pvscan
> sudo vgscan --mknodes
> sudo vgchange -ay
> 
> Here is the unexpected output:
> 
>   PV /dev/mapper/OLDSSD   VG   vg0       lvm2 [238.27 GiB / <15.79 GiB free]
>    Total: 1 [238.27 GiB] / in use: 1 [238.27 GiB] / in no VG: 0 [0   ]
>    Found volume group "vg0" using metadata type lvm2
>    Check of pool vg0/pool00 failed (status:1). Manual repair required!
>    1 logical volume(s) in volume group "vg0" now active
> 
> then I consulted dr. google for diagnosis, but found only little help. This one
> 
> https://mellowhost.com/billing/index.php?rp=/knowledgebase/65/How-to-Repair-a-lvm-thin-pool.html
> 
> suggested deactivating all sub-volumes so that a repair can work correctly. 
> As it happened, only swap was
> active, so I deactivated it. But the repair still does not work:
> 
> lvconvert --repair vg0/pool00
> terminate called after throwing an instance of 'std::runtime_error'
>    what():  transaction_manager::new_block() couldn't allocate new block
>    Child 21255 exited abnormally
>    Repair of thin metadata volume of thin pool vg0/pool00 failed
> (status:-1). Manual repair required!
> 
> 
> I would like to find a good soul out there that can give more hints. In 
> particular,
> could it be a metadata overflow? How to check? I seek not for repair, but a 
> "once only"
> read access to the pool data ....
> 

Hi

Check  'man lvmthin'  "Metadata check and repair" section.
If the 'repair' does not work, make sure you have the 'latest' thin_repair tool (>= 
v0.9) - older distros ship an ancient, less capable version of this tool.

Since you likely already tried to repair metadata - you may need to do the 
manual repair with the use of the _meta0 LV (see man lvmthin).

If you cannot get 'workable' metadata with the 0.9 thin_repair tool - you 
will likely need to create a BZ - upload the compressed content of your metadata 
device for further analysis - to see whether it's somehow possible to recover the btree.


Regards

Zdenek



* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-04-25 13:49 Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure haaber
  2023-04-26 11:10 ` Zdenek Kabelac
@ 2023-04-26 12:06 ` Ming Hung Tsai
  1 sibling, 0 replies; 21+ messages in thread
From: Ming Hung Tsai @ 2023-04-26 12:06 UTC (permalink / raw)
  To: lvm-devel

Hi,

On Tue, Apr 25, 2023 at 9:54 PM haaber <haaber@web.de> wrote:
>
> Dear all,
>
> I had a lethally bad hardware failure and had to replace the machine. Now I am trying to get back some data that is not in my half-year-old backups ... (I know! but it's too late to be sorry). OK, the old SSD is attached via a USB adapter to a brand-new machine. I started
>
> sudo pvscan
> sudo vgscan --mknodes
> sudo vgchange -ay
>
> Here is the  unexpected output:
>
>   PV /dev/mapper/OLDSSD   VG   vg0       lvm2 [238.27 GiB / <15.79 GiB free]
>    Total: 1 [238.27 GiB] / in use: 1 [238.27 GiB] / in no VG: 0 [0   ]
>    Found volume group "vg0" using metadata type lvm2
>    Check of pool vg0/pool00 failed (status:1). Manual repair required!
>    1 logical volume(s) in volume group "vg0" now active
>
> then I consulted dr. google for diagnosis, but found only little help. This one
>
> https://mellowhost.com/billing/index.php?rp=/knowledgebase/65/How-to-Repair-a-lvm-thin-pool.html
>
> suggested deactivating all sub-volumes so that a repair can work correctly. As it happened, only swap was
> active, so I deactivated it. But the repair still does not work:
>
> lvconvert --repair vg0/pool00
> terminate called after throwing an instance of 'std::runtime_error'
>    what():  transaction_manager::new_block() couldn't allocate new block
>    Child 21255 exited abnormally
>    Repair of thin metadata volume of thin pool vg0/pool00 failed
> (status:-1). Manual repair required!
>
>
> I would like to find a good soul out there that can give more hints. In particular,
> could it be a metadata overflow? How to check? I seek not for repair, but a "once only"
> read access to the pool data ....

That should be the 'metadata overflow' you're referring to, i.e.,
running out of metadata space. By default, lvconvert allocates a new
metadata volume of the same size, which might not be sufficient for
restoring a large number of snapshots. The new version of
thin-provisioning-tools (1.0.x) has addressed this issue, so you could
give it a try. Alternatively, you might have to run thin_repair
manually on a larger metadata volume if you want to stick with the
current version.
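Concretely - with placeholder LV names and a placeholder size, since the right
size depends on the pool - the manual route would look something like this:

  # create a temporary LV noticeably larger than the existing metadata LV
  lvcreate -L 1G -n pool00_meta_big vg0

  # activate the damaged metadata sub-LV and repair into the larger LV
  lvchange -ay vg0/pool00_tmeta
  thin_repair -i /dev/vg0/pool00_tmeta -o /dev/vg0/pool00_meta_big

  # sanity-check the result before going any further
  thin_check /dev/vg0/pool00_meta_big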


Ming-Hung Tsai



* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-04-26 11:10 ` Zdenek Kabelac
@ 2023-04-26 13:12   ` haaber
  2023-04-27  9:29     ` Zdenek Kabelac
  0 siblings, 1 reply; 21+ messages in thread
From: haaber @ 2023-04-26 13:12 UTC (permalink / raw)
  To: lvm-devel

Thank you, Ming-Hung and Zdenek, for your quick replies. I answer below!

On 4/26/23 13:10, Zdenek Kabelac wrote:
> Dear all,
>>
>> I had a lethally bad hardware failure and had to replace the machine.
>> Now I am trying to get back some data that is not in my half-year-old
>> backups ... (I know! but it's too late to be sorry). OK, the old SSD
>> is attached via a USB adapter to a brand-new machine. I started
>>
>> sudo pvscan
>> sudo vgscan --mknodes
>> sudo vgchange -ay
>>
>> Here is the unexpected output:
>>
>>   PV /dev/mapper/OLDSSD   VG   vg0       lvm2 [238.27 GiB / <15.79
>> GiB free]
>>    Total: 1 [238.27 GiB] / in use: 1 [238.27 GiB] / in no VG: 0 [0   ]
>>    Found volume group "vg0" using metadata type lvm2
>>    Check of pool vg0/pool00 failed (status:1). Manual repair required!
>>    1 logical volume(s) in volume group "vg0" now active
>>
>> then I consulted dr. google for diagnosis, but found only little
>> help. This one
>>
>> https://mellowhost.com/billing/index.php?rp=/knowledgebase/65/How-to-Repair-a-lvm-thin-pool.html
>>
>>
>> suggested deactivating all sub-volumes so that a repair can work
>> correctly. As it happened, only swap was
>> active, so I deactivated it. But the repair still does not work:
>>
>> lvconvert --repair vg0/pool00
>> terminate called after throwing an instance of 'std::runtime_error'
>>    what():  transaction_manager::new_block() couldn't allocate new block
>>    Child 21255 exited abnormally
>>    Repair of thin metadata volume of thin pool vg0/pool00 failed
>> (status:-1). Manual repair required!
>>
>>
>> I would like to find a good soul out there that can give more hints.
>> In particular,
>> could it be a metadata overflow? How to check? I seek not for repair,
>> but a "once only"
>> read access to the pool data ....
>>
>
> Hi
>
> Check  'man lvmthin'  "Metadata check and repair" section.
> If the 'repair' does not work, make sure you have the 'latest' thin_repair
> tool (>= v0.9) - older distros ship an ancient, less capable
> version of this tool.
>
> Since you likely already tried to repair metadata - you may need to do
> the manual repair with the use of the _meta0 LV (see man lvmthin).
>
> If you cannot get 'workable' metadata with the 0.9 thin_repair tool -
> you will likely need to create a BZ - upload the compressed content of
> your metadata device for further analysis - to see whether it's somehow
> possible to recover the btree.

I have thin_repair 0.9 installed. But I first have to dump metadata into
a file, so I invoked (after pvscan and vgscan and vgchange -an)

    root@machine:~#  thin_dump /dev/mapper/OLDSSD -o thindump.xml -r
    The following field needs to be provided on the command line due to
corruption in the superblock: transaction id

Oops. So the superblock is damaged. What should / could I give
thin_dump as the transaction id? Since we only read, I tried 0:

    root@machine:~#  thin_dump /dev/mapper/OLDSSD -o thindump.xml -r
--transaction-id 0
    The following field needs to be provided on the command line due to
corruption in the superblock: data block size

Oops. I gave it a try and added --data-block-size 128 just to see. Now
it asks for the number of data blocks ... aargh! I cannot guess that one.
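In principle that number does not have to be guessed: it should be the pool's
data LV size divided by the data block (chunk) size, both of which are recorded
in the lvm2 metadata (or via 'lvs -a --units s -o +chunk_size'). A rough sketch,
every number a placeholder, and the flag name assumed - check thin_dump --help:

  # e.g. a ~215 GiB data LV with 128-sector (64 KiB) chunks:
  #   nr_data_blocks = 215 GiB / 64 KiB = 215 * 16384 = 3522560
  thin_dump -r -o thindump.xml --transaction-id 0 \
      --data-block-size 128 --nr-data-blocks 3522560 /path/to/metadata-device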

Could I "dd" the superblock into a file for inspection? Is there only
one superblock? Most filesystems have several, for exactly this reason ...
i.e., can I use a copy?

I dd'ed the first 2M of /dev/mapper/OLDSSD into a file and gave it
a try. After some binary data (less than 1k), there follows roughly 1M of
JSON-like text data like this

whatever {
id = "bhQocj-EJ6Y-0jXC-oAmr-lxlF-cudL-5ohI1e"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1677967121
creation_host = "dom0"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 512

type = "thin"
thin_pool = "pool00"
transaction_id = 44995
device_id = 19881
}

and then more binary data again. Would this 1M (uncompressed), probably
100K bzipped, be of any help? I could post it somewhere. Again, I do
not need the thin pool to be re-usable; I just want to take a last "clean
copy" onto a new disc ...

best, Bernhard






* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-04-26 13:12   ` haaber
@ 2023-04-27  9:29     ` Zdenek Kabelac
  2023-05-03 16:48       ` haaber
  0 siblings, 1 reply; 21+ messages in thread
From: Zdenek Kabelac @ 2023-04-27  9:29 UTC (permalink / raw)
  To: lvm-devel

On 26. 04. 23 at 15:12, haaber wrote:
> Thank you, Ming-Hung and Zdenek, for your quick replies. I answer below!
> 
> On 4/26/23 13:10, Zdenek Kabelac wrote:
>> Dear all,
>>>
>>> I had a lethally bad hardware failure and had to replace the machine.
>>> Now I am trying to get back some data that is not in my half-year-old
>>> backups ... (I know! but it's too late to be sorry). OK, the old SSD
>>> is attached via a USB adapter to a brand-new machine. I started
>>>
>>> sudo pvscan
>>> sudo vgscan --mknodes
>>> sudo vgchange -ay
>>>
>>> Here is the unexpected output:
>>>
>>>   PV /dev/mapper/OLDSSD   VG   vg0       lvm2 [238.27 GiB / <15.79
>>> GiB free]
>>>    Total: 1 [238.27 GiB] / in use: 1 [238.27 GiB] / in no VG: 0 [0   ]
>>>    Found volume group "vg0" using metadata type lvm2
>>>    Check of pool vg0/pool00 failed (status:1). Manual repair required!
>>>    1 logical volume(s) in volume group "vg0" now active
>>>
>>> then I consulted dr. google for diagnosis, but found only little
>>> help. This one
>>>
>>> https://mellowhost.com/billing/index.php?rp=/knowledgebase/65/How-to-Repair-a-lvm-thin-pool.html
>>>
>>>
>>> suggested deactivating all sub-volumes so that a repair can work
>>> correctly. As it happened, only swap was
>>> active, so I deactivated it. But the repair still does not work:
>>>
>>> lvconvert --repair vg0/pool00
>>> terminate called after throwing an instance of 'std::runtime_error'
>>>    what():  transaction_manager::new_block() couldn't allocate new block
>>>    Child 21255 exited abnormally
>>>    Repair of thin metadata volume of thin pool vg0/pool00 failed
>>> (status:-1). Manual repair required!
>>>
>>>
>>> I would like to find a good soul out there that can give more hints.
>>> In particular,
>>> could it be a metadata overflow? How to check? I seek not for repair,
>>> but a "once only"
>>> read access to the pool data ....
>>>
>>
>> Hi
>>
>> Check  'man lvmthin'  "Metadata check and repair" section.
>> If the 'repair' does not work, make sure you have the 'latest' thin_repair
>> tool (>= v0.9) - older distros ship an ancient, less capable
>> version of this tool.
>>
>> Since you likely already tried to repair metadata - you may need to do
>> the manual repair with the use of the _meta0 LV (see man lvmthin).
>>
>> If you cannot get 'workable' metadata with the 0.9 thin_repair tool -
>> you will likely need to create a BZ - upload the compressed content of
>> your metadata device for further analysis - to see whether it's somehow
>> possible to recover the btree.
> 
> I have thin_repair 0.9 installed. But I first have to dump metadata into
> a file, so I invoked (after pvscan and vgscan and vgchange -an)
> 
>      root@machine:~#  thin_dump /dev/mapper/OLDSSD -o thindump.xml -r
>      The following field needs to be provided on the command line due to
> corruption in the superblock: transaction id
> 
> Oops. So the superblock is damaged. What should / could I give
> thin_dump as the transaction id? Since we only read, I tried 0:
> 
>      root@machine:~#  thin_dump /dev/mapper/OLDSSD -o thindump.xml -r
> --transaction-id 0
>      The following field needs to be provided on the command line due to
> corruption in the superblock: data block size
> 
> Oops. I gave it a try and added --data-block-size 128 just to see. Now
> it asks for the number of data blocks ... aargh! I cannot guess that one.

Hi

I'm not sure I'm getting your process right here.

There are 2 types of 'metadata', and different recovery work is needed for each.

> Could I "dd" the superblock into a file for inspection? Is there only
> one superblock? Most filesystems have several, for exactly this reason ...
> i.e., can I use a copy?
> 
> I dd'ed the first 2M of /dev/mapper/OLDSSD into a file and gave it
> a try. After some binary data (less than 1k), there follows roughly 1M of
> JSON-like text data like this
> 
> whatever {
> id = "bhQocj-EJ6Y-0jXC-oAmr-lxlF-cudL-5ohI1e"
> status = ["READ", "WRITE", "VISIBLE"]
> flags = []
> creation_time = 1677967121
> creation_host = "dom0"
> segment_count = 1
> 
> segment1 {
> start_extent = 0
> extent_count = 512
> 
> type = "thin"
> thin_pool = "pool00"
> transaction_id = 44995
> device_id = 19881
> }
> 
> and then more binary data again. Would this 1M (uncompressed), probably
> 100K bzipped, be of any help? I could post it somewhere. Again, I do
> not need the thin pool to be re-usable; I just want to take a last "clean
> copy" onto a new disc ...

There is the 'lvm2' metadata - which is stored within the PV disk header
(by default this is located in the first 1 MiB of your device).

This metadata has absolutely nothing to do with the thin-pool metadata!
lvm2 just keeps the layout of blocks for your LVs.

To get to your thin-pool metadata, you have to activate the LV holding it
(lvchange -ay  vgname/thinpoolmetadata).

Once you have your thin-pool metadata 'active' (present in the DM table), you
can then fire the 'thin_dump --repair' / 'thin_repair' tool.
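In rough outline - the LV and file names below are placeholders, the real ones
come from 'lvs -a':

  # activate only the (hidden) metadata sub-LV of the pool
  lvchange -ay vgname/poolname_tmeta

  # read-only dump with repair; writes XML you can inspect
  thin_dump --repair /dev/vgname/poolname_tmeta -o /tmp/tmeta_dump.xml

  # or write a repaired copy straight into a spare LV
  thin_repair -i /dev/vgname/poolname_tmeta -o /dev/vgname/sparelv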

ATM it's not clear which state of recovery you are in.

So, do you have your 'lvm2' metadata complete & usable, and can you activate the LV 
which holds the thin-pool metadata?

Can you please provide 'lvs -a' output for your volume group?

And if you used 'thin_dump/thin_repair' - what is the *exact* command line you've been 
using?

Regards

Zdenek









* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-04-27  9:29     ` Zdenek Kabelac
@ 2023-05-03 16:48       ` haaber
  2023-05-04 13:17         ` Zdenek Kabelac
  0 siblings, 1 reply; 21+ messages in thread
From: haaber @ 2023-05-03 16:48 UTC (permalink / raw)
  To: lvm-devel

Dear Zdenek,

I had a forced break, but the subject is still active ...

>
> To get to your thin-pool metadata, you have to activate the LV holding it
> (lvchange -ay  vgname/thinpoolmetadata).
>
my pool is called qubes_dom0/pool00 (now you know my previous operating
system :). So I tried

lvchange -ay qubes_dom0/thinpoolmetadata

but that fails:   Failed to find logical volume
"qubes_dom0/thinpoolmetadata"

Then I tried

lvchange -ay qubes_dom0/pool00_tmeta

and that worked (gave a warning). But I do not know how to run
thin_repair now :((

> Can you please provide 'lvs -a' output for your volume group?

I attached that file. It's a little mess, I fear. Thank you for your help,


best, Bernhard

-------------- next part --------------
  LV                                                 VG         Attr       LSize    Pool   Origin                                             Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare]                                    qubes_dom0 ewi-------  108.00m                                                                                                  
  pool00                                             qubes_dom0 twi---tz-- <214.78g                                                                                                  
  [pool00_tdata]                                     qubes_dom0 Twi------- <214.78g                                                                                                  
  [pool00_tmeta]                                     qubes_dom0 ewi-------  108.00m                                                                                                  
  root                                               qubes_dom0 Vwi---tz-- <214.78g pool00                                                                                           
  swap                                               qubes_dom0 -wi-a-----   <7.50g                                                                                                  
  vm-Android-private                                 qubes_dom0 Vwi---tz--    5.00g pool00 vm-Android-private-1633545383-back                                                        
  vm-Android-private-1633545383-back                 qubes_dom0 Vwi---tz--    5.00g pool00                                                                                           
  vm-Arbeit-private                                  qubes_dom0 Vwi---tz--   30.00g pool00 vm-Arbeit-private-1678980932-back                                                         
  vm-Arbeit-private-1678980932-back                  qubes_dom0 Vwi---tz--   30.00g pool00                                                                                           
  vm-Arbeit-private-snap                             qubes_dom0 Vwi---tz--   30.00g pool00 vm-Arbeit-private                                                                         
  vm-Arbeit-root-snap                                qubes_dom0 Vwi---tz--   20.00g pool00 vm-debian-11-root-1679130937-back                                                         
  vm-Arbeit-volatile                                 qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-Fernwartung-private                             qubes_dom0 Vwi---tz--    2.00g pool00 vm-Fernwartung-private-1604748655-back                                                    
  vm-Fernwartung-private-1604748655-back             qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-GPG-keys-private                                qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-GPG-keys-private-snap                           qubes_dom0 Vwi---tz--    2.00g pool00 vm-GPG-keys-private                                                                       
  vm-GPG-keys-root-snap                              qubes_dom0 Vwi---tz--   15.00g pool00                                                                                           
  vm-GPG-keys-volatile                               qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-PLM-private                                     qubes_dom0 Vwi---tz--    2.00g pool00 vm-PLM-private-1561430058-back                                                            
  vm-PLM-private-1561430058-back                     qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-Privat-private                                  qubes_dom0 Vwi---tz--  156.25g pool00 vm-Privat-private-1658835892-back                                                         
  vm-Privat-private-1658835892-back                  qubes_dom0 Vwi---tz--  156.25g pool00                                                                                           
  vm-Privat-private-snap                             qubes_dom0 Vwi---tz--  156.25g pool00 vm-Privat-private                                                                         
  vm-Privat-root-snap                                qubes_dom0 Vwi---tz--   20.00g pool00                                                                                           
  vm-Privat-volatile                                 qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-Tresorraum-private                              qubes_dom0 Vwi---tz--    2.00g pool00 vm-Tresorraum-private-1678980925-back                                                     
  vm-Tresorraum-private-1678980925-back              qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-Tresorraum-private-snap                         qubes_dom0 Vwi---tz--    2.00g pool00 vm-Tresorraum-private                                                                     
  vm-Tresorraum-root-snap                            qubes_dom0 Vwi---tz--   20.00g pool00 vm-debian-11-root                                                                         
  vm-Tresorraum-volatile                             qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-Verwaltung-private                              qubes_dom0 Vwi---tz--    5.99g pool00 vm-Verwaltung-private-1677926231-back                                                     
  vm-Verwaltung-private-1677926231-back              qubes_dom0 Vwi---tz--    5.99g pool00                                                                                           
  vm-Verwaltung-private-snap                         qubes_dom0 Vwi---tz--    5.99g pool00 vm-Verwaltung-private                                                                     
  vm-Verwaltung-root-snap                            qubes_dom0 Vwi---tz--   20.00g pool00 vm-debian-11-root                                                                         
  vm-Verwaltung-volatile                             qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-anon-whonix-private                             qubes_dom0 Vwi---tz--    3.00g pool00 vm-anon-whonix-private-1675500726-back                                                    
  vm-anon-whonix-private-1675500726-back             qubes_dom0 Vwi---tz--    3.00g pool00                                                                                           
  vm-buster-print-private                            qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-buster-print-root                               qubes_dom0 Vwi---tz--   10.00g pool00 vm-buster-print-root-1647866494-back                                                      
  vm-buster-print-root-1647866494-back               qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-debian-11-minimal-firewall-private              qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-debian-11-minimal-firewall-root                 qubes_dom0 Vwi---tz--   10.00g pool00 vm-debian-11-minimal-firewall-root-1677968205-back                                        
  vm-debian-11-minimal-firewall-root-1677968205-back qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-debian-11-minimal-net-private                   qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-debian-11-minimal-net-root                      qubes_dom0 Vwi---tz--   10.00g pool00 vm-debian-11-minimal-net-root-1677968245-back                                             
  vm-debian-11-minimal-net-root-1677968245-back      qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-debian-11-minimal-private                       qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-debian-11-minimal-root                          qubes_dom0 Vwi---tz--   10.00g pool00 vm-debian-11-minimal-root-1640161303-back                                                 
  vm-debian-11-minimal-root-1640161303-back          qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-debian-11-minimal-usb-private                   qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-debian-11-minimal-usb-root                      qubes_dom0 Vwi---tz--   10.00g pool00 vm-debian-11-minimal-usb-root-1640161428-back                                             
  vm-debian-11-minimal-usb-root-1640161428-back      qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-debian-11-private                               qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-debian-11-root                                  qubes_dom0 Vwi---tz--   20.00g pool00 vm-debian-11-root-1679130937-back                                                         
  vm-debian-11-root-1679130937-back                  qubes_dom0 Vwi---tz--   20.00g pool00                                                                                           
  vm-debian-dvm-private                              qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-default-mgmt-dvm-private                        qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-disp383-private-snap                            qubes_dom0 Vwi---tz--    2.00g pool00 vm-debian-dvm-private                                                                     
  vm-disp383-root-snap                               qubes_dom0 Vwi---tz--   20.00g pool00 vm-debian-11-root-1679130937-back                                                         
  vm-disp383-volatile                                qubes_dom0 Vwi---tz--   12.00g pool00                                                                                           
  vm-dummy-private                                   qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-dummy-root                                      qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-flashing-private                                qubes_dom0 Vwi---tz--    2.00g pool00 vm-flashing-private-1647856203-back                                                       
  vm-flashing-private-1647856203-back                qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-mail-privat-private                             qubes_dom0 Vwi---tz--    4.00g pool00 vm-mail-privat-private-1678980932-back                                                    
  vm-mail-privat-private-1678980932-back             qubes_dom0 Vwi---tz--    4.00g pool00                                                                                           
  vm-mail-privat-private-snap                        qubes_dom0 Vwi---tz--    4.00g pool00 vm-mail-privat-private                                                                    
  vm-mail-privat-root-snap                           qubes_dom0 Vwi---tz--   20.00g pool00 vm-debian-11-root-1679130937-back                                                         
  vm-mail-privat-volatile                            qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-mirage-builder-private                          qubes_dom0 Vwi---tz--   20.00g pool00 vm-mirage-builder-private-1652136334-back                                                 
  vm-mirage-builder-private-1652136334-back          qubes_dom0 Vwi---tz--   20.00g pool00                                                                                           
  vm-mirage-firewall-private                         qubes_dom0 Vwi---tz--    2.00g pool00 vm-mirage-firewall-private-1678980930-back                                                
  vm-mirage-firewall-private-1678980930-back         qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-mirage-firewall-private-snap                    qubes_dom0 Vwi---tz--    2.00g pool00 vm-mirage-firewall-private                                                                
  vm-mirage-firewall-root                            qubes_dom0 Vwi---tz--   10.00g pool00 vm-mirage-firewall-root-1678980930-back                                                   
  vm-mirage-firewall-root-1678980930-back            qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-mirage-firewall-root-snap                       qubes_dom0 Vwi---tz--   10.00g pool00 vm-mirage-firewall-root                                                                   
  vm-mirage-firewall-volatile                        qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-printing-private                                qubes_dom0 Vwi---tz--    2.00g pool00 vm-printing-private-1647858008-back                                                       
  vm-printing-private-1647858008-back                qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-sshkeys-private                                 qubes_dom0 Vwi---tz--    2.00g pool00 vm-sshkeys-private-1623251897-back                                                        
  vm-sshkeys-private-1623251897-back                 qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-sshkeys-private-snap                            qubes_dom0 Vwi---tz--    2.00g pool00 vm-sshkeys-private                                                                        
  vm-sshkeys-root-snap                               qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-sshkeys-volatile                                qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-sys-firewall-private                            qubes_dom0 Vwi---tz--    2.00g pool00 vm-sys-firewall-private-1678980932-back                                                   
  vm-sys-firewall-private-1678980932-back            qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-sys-firewall-private-snap                       qubes_dom0 Vwi---tz--    2.00g pool00 vm-sys-firewall-private                                                                   
  vm-sys-firewall-root-snap                          qubes_dom0 Vwi---tz--   10.00g pool00 vm-debian-11-minimal-firewall-root                                                        
  vm-sys-firewall-volatile                           qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-sys-net-private                                 qubes_dom0 Vwi---tz--    2.00g pool00 vm-sys-net-private-1678980929-back                                                        
  vm-sys-net-private-1678980929-back                 qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-sys-net-private-snap                            qubes_dom0 Vwi---tz--    2.00g pool00 vm-sys-net-private                                                                        
  vm-sys-net-root-snap                               qubes_dom0 Vwi---tz--   10.00g pool00 vm-debian-11-minimal-net-root                                                             
  vm-sys-net-volatile                                qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-sys-usb-private                                 qubes_dom0 Vwi---tz--    5.00g pool00 vm-sys-usb-private-1679256316-back                                                        
  vm-sys-usb-private-1679256316-back                 qubes_dom0 Vwi---tz--    5.00g pool00                                                                                           
  vm-sys-whonix-private                              qubes_dom0 Vwi---tz--    2.00g pool00 vm-sys-whonix-private-1678980932-back                                                     
  vm-sys-whonix-private-1678980932-back              qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-sys-whonix-private-snap                         qubes_dom0 Vwi---tz--    2.00g pool00 vm-sys-whonix-private                                                                     
  vm-sys-whonix-root-snap                            qubes_dom0 Vwi---tz--   10.00g pool00 vm-whonix-gw-16-root-1679305216-back                                                      
  vm-sys-whonix-volatile                             qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-tribler-private                                 qubes_dom0 Vwi---tz--   20.00g pool00 vm-tribler-private-1671645867-back                                                        
  vm-tribler-private-1671645867-back                 qubes_dom0 Vwi---tz--   20.00g pool00                                                                                           
  vm-untrusted-private                               qubes_dom0 Vwi---tz--   42.00g pool00 vm-untrusted-private-1678980932-back                                                      
  vm-untrusted-private-1678980932-back               qubes_dom0 Vwi---tz--   42.00g pool00                                                                                           
  vm-untrusted-private-snap                          qubes_dom0 Vwi---tz--   42.00g pool00 vm-untrusted-private                                                                      
  vm-untrusted-root-snap                             qubes_dom0 Vwi---tz--   20.00g pool00 vm-debian-11-root-1679130937-back                                                         
  vm-untrusted-volatile                              qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-videokonferenz-private                          qubes_dom0 Vwi---tz--    2.00g pool00 vm-videokonferenz-private-1621252915-back                                                 
  vm-videokonferenz-private-1621252915-back          qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-whonix-gw-16-private                            qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-whonix-gw-16-root-1679305216-back               qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           
  vm-whonix-ws-15-dvm-private                        qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-whonix-ws-16-private                            qubes_dom0 Vwi---tz--    2.00g pool00                                                                                           
  vm-whonix-ws-16-root                               qubes_dom0 Vwi---tz--   10.00g pool00 vm-whonix-ws-16-root-1674925902-back                                                      
  vm-whonix-ws-16-root-1674925902-back               qubes_dom0 Vwi---tz--   10.00g pool00                                                                                           


* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-03 16:48       ` haaber
@ 2023-05-04 13:17         ` Zdenek Kabelac
  2023-05-04 16:31           ` haaber
  2023-05-04 17:06           ` haaber
  0 siblings, 2 replies; 21+ messages in thread
From: Zdenek Kabelac @ 2023-05-04 13:17 UTC (permalink / raw)
  To: lvm-devel

On 03. 05. 23 at 18:48, haaber wrote:
> Dear Zdenek,
>
> I had a forced break, but the subject is still active..
>
>>
>> To get to your thin-pool metadata, you have to activate the LV holding it
>> (lvchange -ay  vgname/thinpoolmetadata).
>>
> my pool is called qubes_dom0/pool00 (now you know my previous operating
> system :). So I tried
>
> lvchange -ay qubes_dom0/thinpoolmetadata
>
> but that fails:   Failed to find logical volume
> "qubes_dom0/thinpoolmetadata"
>
> Then I tried
>
> lvchange -ay qubes_dom0/pool00_tmeta


Looking at your 'lvs -a' output - you should be able to get this one active.


You will need another LV to write fixed metadata into

# lvcreate -L128M -n newlv qubes_dom0


Then you run

# thin_repair -i /dev/qubes_dom0/pool00_tmeta -o /dev/qubes_dom0/newlv

If you get a repaired set, you can probably validate it with

 # thin_dump /dev/qubes_dom0/newlv

If you see a 'good amount' of data describing the block mappings
for many thin volumes, the repair likely worked.

However, if you only see a couple of lines - basically empty thin-pool metadata - you 
will need to store the content of your original unmodified metadata in a 
compressed file and upload that file for further exploration.
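For that case, roughly (output paths are placeholders):

  # take a raw copy of the unmodified metadata LV and compress it
  dd if=/dev/qubes_dom0/pool00_tmeta of=/tmp/pool00_tmeta.img bs=512K
  bzip2 /tmp/pool00_tmeta.img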

Let me know what you get from those steps above.


Zdenek





* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-04 13:17         ` Zdenek Kabelac
@ 2023-05-04 16:31           ` haaber
  2023-05-05 15:14             ` Zdenek Kabelac
  2023-05-04 17:06           ` haaber
  1 sibling, 1 reply; 21+ messages in thread
From: haaber @ 2023-05-04 16:31 UTC (permalink / raw)
  To: lvm-devel

Dear Zdenek

On 5/4/23 15:17, Zdenek Kabelac wrote:
>
>> lvchange -ay qubes_dom0/pool00_tmeta
>
>
> Looking at your 'lvs -a' output - you should be able to get this one
> active.
>
>
> You will need another LV to write fixed metadata into
>
> # lvcreate -L128M -n newlv qubes_dom0

He was yelling at me:

# lvcreate -L128M -n newlv qubes_dom0

  WARNING: Sum of all thin volume sizes (<1.62 TiB) exceeds the size of
thin pools and the size of whole volume group (238.27 GiB).
  WARNING: You have not turned on protection against thin pools running
out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to
trigger automatic extension of thin pools before they get full.
  Logical volume "newlv" created.

But since he seemed to live with it, I gave it a try and continued. So I
activated tmeta by

lvchange -ay qubes_dom0/pool00_tmeta

>
> Then you run
>
> # thin_repair -i /dev/qubes_dom0/pool00_tmeta -o /dev/qubes_dom0/newlv
>
# thin_repair -i /dev/qubes_dom0/pool00_tmeta -o /dev/qubes_dom0/newlv
terminate called after throwing an instance of 'std::runtime_error'
  what():  transaction_manager::new_block() couldn't allocate new block
Aborted


>
> Let me know what you get from those steps above.
>
and that is where I am stuck now. By the way:

# thin_dump /dev/qubes_dom0/newlv
bad checksum in superblock, wanted 1490015127


thank you for your help & time, Bernhard




* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-04 13:17         ` Zdenek Kabelac
  2023-05-04 16:31           ` haaber
@ 2023-05-04 17:06           ` haaber
  2023-05-05  9:42             ` Ming Hung Tsai
  2023-05-05 15:07             ` Zdenek Kabelac
  1 sibling, 2 replies; 21+ messages in thread
From: haaber @ 2023-05-04 17:06 UTC (permalink / raw)
  To: lvm-devel

Dear Zdenek,

here https://we.tl/t-41PiPG2V1G is the output of

# thin_dump /dev/qubes_dom0/pool00_tmeta > pool00_tmeta

metadata contains errors (run thin_check for details).

perhaps you wanted to run with --repair


best, Bernhard







* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-04 17:06           ` haaber
@ 2023-05-05  9:42             ` Ming Hung Tsai
  2023-05-05 15:07             ` Zdenek Kabelac
  1 sibling, 0 replies; 21+ messages in thread
From: Ming Hung Tsai @ 2023-05-05  9:42 UTC (permalink / raw)
  To: lvm-devel

Hi,

The thin_dump output looks fine, so I would like to know why there are error
messages emitted. Could you please provide the raw metadata dump? Just
'dd' the entire '/dev/qubes_dom0/pool00_tmeta' into a file, then upload the
compressed file.


Thanks,
Ming-Hung Tsai

On Fri, May 5, 2023 at 1:07 AM haaber <haaber@web.de> wrote:

> Dear Zdenek,
>
> here https://we.tl/t-41PiPG2V1G     is the output of
>
> #thin_dump   /dev/qubes_dom0/pool00_tmeta  > pool00_tmeta
>
> metadata contains errors (run thin_check for details).
>
> perhaps you wanted to run with --repair
>
>
> best, Bernhard

* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-04 17:06           ` haaber
  2023-05-05  9:42             ` Ming Hung Tsai
@ 2023-05-05 15:07             ` Zdenek Kabelac
  2023-05-05 16:25               ` Ming Hung Tsai
  2023-05-11  7:39               ` haaber
  1 sibling, 2 replies; 21+ messages in thread
From: Zdenek Kabelac @ 2023-05-05 15:07 UTC (permalink / raw)
  To: lvm-devel

On 04. 05. 23 at 19:06, haaber wrote:
> Dear Zdenek,
> 
> here https://we.tl/t-41PiPG2V1G is the output of
> 
> # thin_dump /dev/qubes_dom0/pool00_tmeta > pool00_tmeta
> 

Hi

We need an exact binary copy of the _tmeta LV - thus just use

dd if=/dev/qubes_dom0/pool00_tmeta of=/tmp/tmeta_copy bs=512K
bzip2 /tmp/tmeta_copy

With this data, also provide the full lvm2 metadata for this VG
(it should be available as a file in /etc/lvm/backup - or you can
just run vgcfgbackup)
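For example (the output file name here is arbitrary):

  # write the current lvm2 metadata of the VG to an explicit file
  vgcfgbackup -f /tmp/qubes_dom0.vgcfg qubes_dom0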

thin_dump output is an already 'processed' result - so it is not usable for further investigation.

(Although it might be an interesting idea to add an 'option' that provides 
the above 'binary backup' as a built-in feature of this tool for bug reporting...)


Regards

Zdenek



* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-04 16:31           ` haaber
@ 2023-05-05 15:14             ` Zdenek Kabelac
  0 siblings, 0 replies; 21+ messages in thread
From: Zdenek Kabelac @ 2023-05-05 15:14 UTC (permalink / raw)
  To: lvm-devel

On 04. 05. 23 at 18:31, haaber wrote:
> Dear Zdenek
>
> On 5/4/23 15:17, Zdenek Kabelac wrote:
>>
>>> lvchange -ay qubes_dom0/pool00_tmeta
>>
>>
>> Looking at your 'lvs -a' output - you should be able to get this one
>> active.
>>
>>
>> You will need another LV to write fixed metadata into
>>
>> # lvcreate -L128M -n newlv qubes_dom0
>
> He was yelling at me:
>
> # lvcreate -L128M -n newlv qubes_dom0
>
>   WARNING: Sum of all thin volume sizes (<1.62 TiB) exceeds the size of
> thin pools and the size of whole volume group (238.27 GiB).
>   WARNING: You have not turned on protection against thin pools running
> out of space.
>   WARNING: Set activation/thin_pool_autoextend_threshold below 100 to
> trigger automatic extension of thin pools before they get full.
>   Logical volume "newlv" created.
>
> But since he seemed to live with it, I gave it a try and continued. So I
> activated tmeta by
>
> lvchange -ay qubes_dom0/pool00_tmeta
>
>>
>> Then you run
>>
>> # thin_repair -i /dev/qubes_dom0/pool00_tmeta -o /dev/qubes_dom0/newlv
>>
> # thin_repair -i /dev/qubes_dom0/pool00_tmeta -o /dev/qubes_dom0/newlv
> terminate called after throwing an instance of 'std::runtime_error'
>   what():  transaction_manager::new_block() couldn't allocate new block
> Aborted


Yeah, this will need Ming's investigation with the raw binary data from your 
metadata volume

(see my 2nd mail)

Regards


Zdenek





* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-05 15:07             ` Zdenek Kabelac
@ 2023-05-05 16:25               ` Ming Hung Tsai
  2023-05-11  7:39               ` haaber
  1 sibling, 0 replies; 21+ messages in thread
From: Ming Hung Tsai @ 2023-05-05 16:25 UTC (permalink / raw)
  To: lvm-devel

RHEL & Fedora RPMs already have the thin_metadata_pack/unpack tools
integrated. Other distros may offer the tools if they have updated the
package to v1.0.x.
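If available, they give a compact dump that is convenient for bug reports -
roughly like this (option names from memory, so check the tools' --help;
file names are placeholders):

  # pack the metadata device into a compact file for upload
  thin_metadata_pack -i /dev/qubes_dom0/pool00_tmeta -o /tmp/tmeta.pack

  # the receiving side unpacks it back into a metadata image
  thin_metadata_unpack -i /tmp/tmeta.pack -o /tmp/tmeta.img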

On Fri, May 5, 2023 at 11:08 PM Zdenek Kabelac

> Hi
>
> We need an exact binary copy of the _tmeta LV - thus just use
>
> dd if=/dev/qubes_dom0/pool00_tmeta of=/tmp/tmeta_copy bs=512K
> bzip2 /tmp/tmeta_copy
>
> With this data - also provide full lvm2 metadata for this VG
> (should be as a file in  /etc/lvm/backup  - or you could run
> just vgcfgbackup)
>
> thin_dump output is an already 'processed' result - so it is not usable for further
> investigation.
>
> (Although it might be an interesting idea to add an 'option' that
> provides
> the above 'binary backup' as a built-in feature of this tool for bug
> reporting...)
>
>
> Regards
>
> Zdenek
>

* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-05 15:07             ` Zdenek Kabelac
  2023-05-05 16:25               ` Ming Hung Tsai
@ 2023-05-11  7:39               ` haaber
  2023-05-12  3:29                 ` Ming Hung Tsai
  2023-05-17 15:17                 ` Ming Hung Tsai
  1 sibling, 2 replies; 21+ messages in thread
From: haaber @ 2023-05-11  7:39 UTC (permalink / raw)
  To: lvm-devel

Dear all,

> We need an exact binary copy of the _tmeta LV - thus just use
>
> dd if=/dev/qubes_dom0/pool00_tmeta of=/tmp/tmeta_copy bs=512K
> bzip2 /tmp/tmeta_copy
>
the output is here: https://we.tl/t-AEmlc5CYeH

> With this data - also provide full lvm2 metadata for this VG
> (should be as a file in /etc/lvm/backup - or you could run
> just vgcfgbackup)
>
I attached that one directly. Thank you very much!

best, Bernhard
-------------- next part --------------
A non-text attachment was scrubbed...
Name: qubes_dom0.bz2
Type: application/octet-stream
Size: 8156 bytes
Desc: not available
URL: <http://listman.redhat.com/archives/lvm-devel/attachments/20230511/aab57f85/attachment-0001.obj>


* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-11  7:39               ` haaber
@ 2023-05-12  3:29                 ` Ming Hung Tsai
  2023-05-12 18:05                   ` haaber
  2023-05-17 15:17                 ` Ming Hung Tsai
  1 sibling, 1 reply; 21+ messages in thread
From: Ming Hung Tsai @ 2023-05-12  3:29 UTC (permalink / raw)
  To: lvm-devel

Hi,

There's one corrupted leaf node in device #20081, which might have been
caused by the hardware failure, and it is what stops thin_dump or lvconvert from
working. Running thin_check shows the details (I'm using
v1.0.5):

TRANSACTION_ID=45452
METADATA_FREE_BLOCKS=11827
1 nodes in data mapping tree contain errors
0 io errors, 1 checksum errors
Thin device 20081 has 1 error nodes and is missing 22664 mappings, while
expected 296263
Check of mappings failed

The issue is repairable by rolling back to the previous transaction. I'm
going to patch the program to make this easier to use. It should be ready
next week; in the meantime you can try building the Rust version yourself
(steps below, with a command sketch after the list).

1. Install the Rust toolchain via the rustup script (https://rustup.rs/)
2. Clone the thin-provisioning-tools.git repo, then build it (cargo build
--release)
3. Try the built pdata_tools binary (placed under ./target/release/)
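A rough command-line version of those steps, assuming a typical setup (the
rustup line is the standard bootstrap from https://rustup.rs/):

  # 1. install the Rust toolchain
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

  # 2. fetch and build the tools
  git clone https://github.com/jthornber/thin-provisioning-tools.git
  cd thin-provisioning-tools
  cargo build --release

  # 3. the multi-call binary is then at ./target/release/pdata_tools
  ./target/release/pdata_tools thin_check /dev/qubes_dom0/pool00_tmeta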


Ming-Hung Tsai

On Thu, May 11, 2023 at 3:39 PM haaber <haaber@web.de> wrote:

> Dear all,
>
> We need an exact binary copy of the _tmeta LV - thus just use
> >
> > dd if=/dev/qubes_dom0/pool00_tmeta of=/tmp/tmeta_copy bs=512K
> > bzip2 /tmp/tmeta_copy
> >
> the output is here: https://we.tl/t-AEmlc5CYeH
>
> > With this data - also provide full lvm2 metadata for this VG
> > (should be as a file in  /etc/lvm/backup  - or you could run
> > just vgcfgbackup)
> >
> I attached that one directly. Thank you very much!
>
> best, Bernhard
>

* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-12  3:29                 ` Ming Hung Tsai
@ 2023-05-12 18:05                   ` haaber
  2023-05-13  3:20                     ` Ming Hung Tsai
  0 siblings, 1 reply; 21+ messages in thread
From: haaber @ 2023-05-12 18:05 UTC (permalink / raw)
  To: lvm-devel

Dear Ming-Hung,


>
> There's one corrupted leaf node in device #20081, which might possibly
> be caused by the hardware failure and that stops thin_dump or
> lvconvert from working. Running thin_check would show you the details
> (I'm using the v1.0.5):
>
> TRANSACTION_ID=45452
> METADATA_FREE_BLOCKS=11827
> 1 nodes in data mapping tree contain errors
> 0 io errors, 1 checksum errors
> Thin device 20081 has 1 error nodes and is missing 22664 mappings,
> while expected 296263
> Check of mappings failed
>
> The issue is repairable by rolling back to the previous transaction.
> I'm going to patch the program to make it easier to use. It should be
> ready next week, and you can try to learn how to build the Rust
> version for now.
>
> 1. Install the Rust toolchain via the rustup script (https://rustup.rs/)
> 2. Clone the thin-provisioning-tools.git repo, then build it (cargo
> build --release)
> 3. Try the built pdata_tools binary (placed under ./target/release/)
>
thank you for this inspection! I now have hope of recovering my data again :)

Silly question: I cloned
https://github.com/jthornber/thin-provisioning-tools and that way successfully
installed 1.0.4, but that is not the 1.0.5 branch you talked
about. Could you point me to the right 1.0.5 git, please?

thank you, Bernhard





* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-12 18:05                   ` haaber
@ 2023-05-13  3:20                     ` Ming Hung Tsai
  0 siblings, 0 replies; 21+ messages in thread
From: Ming Hung Tsai @ 2023-05-13  3:20 UTC (permalink / raw)
  To: lvm-devel

You're right, that was a typo; it's currently on 1.0.4.

On Sat, May 13, 2023 at 2:10 AM haaber <haaber@web.de> wrote:

> Dear Ming-Hung,
>
> thank you for this inspection!   I now have hope again to recover my data
> :)
>
> Silly question: I  cloned
> https://github.com/jthornber/thin-provisioning-tools and installed that
> way successfully 1.0.4 but that is not the 1.0.5 branch you talked
> about. Could you point the right 1.0.5 git, please?
>
> thank you, Bernhard
>

* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-11  7:39               ` haaber
  2023-05-12  3:29                 ` Ming Hung Tsai
@ 2023-05-17 15:17                 ` Ming Hung Tsai
  2023-05-20 20:34                   ` haaber
  1 sibling, 1 reply; 21+ messages in thread
From: Ming Hung Tsai @ 2023-05-17 15:17 UTC (permalink / raw)
  To: lvm-devel

Hi,

I've pushed the changes upstream. Now you should be able to repair the pool
via "lvconvert --repair" after installation.

On Thu, May 11, 2023 at 3:39 PM haaber <haaber@web.de> wrote:

> Dear all,
>
> We need the exact binary copy of _tmeta  LV  - thus just use
> >
> > dd if=/dev/qubes_dom0/pool00_tmeta of=/tmp/tmeta_copy bs=512K
> > bzip2 /tmp/tmeta_copy
> >
> the output is here: https://we.tl/t-AEmlc5CYeH
>
> > With this data - also provide full lvm2 metadata for this VG
> > (should be as a file in  /etc/lvm/backup  - or you could run
> > just vgcfgbackup)
> >
> I attached that one directly. Thank you very much!
>
> best, Bernhard
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-17 15:17                 ` Ming Hung Tsai
@ 2023-05-20 20:34                   ` haaber
  2023-05-22  7:40                     ` Ming Hung Tsai
  0 siblings, 1 reply; 21+ messages in thread
From: haaber @ 2023-05-20 20:34 UTC (permalink / raw)
  To: lvm-devel

Hi Ming

thank you so much. I compiled it and did make install, but lvconvert is
still the old one. It should be possible to do the job with
pdata_tools directly, right!? I had formerly created a
"newlv" inside my qubes_dom0 pool, so I ran

./pdata_tools thin_repair -i /dev/qubes_dom0/pool00_tmeta -o
/dev/qubes_dom0/newlv

That command ran for about 5 seconds and returned without any output, which
is usually a good sign. I did not run it with a verbose flag, stupid me.

Can I now run some command that activates the pool with "newlv" as
metadata? Or back up the old metadata, and then copy the "newlv"
metadata into pool00_tmeta? I am, of course, afraid of destroying
it all so close to the end, so I better ask once more :-)
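
A hedged sketch of how the repaired copy could be sanity-checked before touching the pool, assuming "newlv" holds the thin_repair output as described above; -v is the verbose option mentioned in the reply that follows:

```
# Re-run the repair with verbose logging to see what was recovered
# (this simply rewrites newlv with the same result)
./pdata_tools thin_repair -v -i /dev/qubes_dom0/pool00_tmeta \
                             -o /dev/qubes_dom0/newlv

# Verify the repaired metadata is structurally sound before swapping it in
./pdata_tools thin_check /dev/qubes_dom0/newlv
```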

best, Bernhard




On 5/17/23 17:17, Ming Hung Tsai wrote:
> Hi,
>
> I've pushed the changes upstream. Now you should be able to repair the
> pool via "lvconvert --repair" after installation.
>
> On Thu, May 11, 2023 at 3:39 PM haaber <haaber@web.de> wrote:
>
>     Dear all,
>
>     We need the exact binary copy of _tmeta LV - thus just use
>     >
>     > dd if=/dev/qubes_dom0/pool00_tmeta of=/tmp/tmeta_copy bs=512K
>     > bzip2 /tmp/tmeta_copy
>     >
>     the output is here: https://we.tl/t-AEmlc5CYeH
>
>     > With this data - also provide full lvm2 metadata for this VG
>     > (should be as a file in /etc/lvm/backup - or you could run
>     > just vgcfgbackup)
>     >
>     I attached that one directly. Thank you very much!
>
>     best, Bernhard
>
>
> --
> lvm-devel mailing list
> lvm-devel at redhat.com
> https://listman.redhat.com/mailman/listinfo/lvm-devel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-20 20:34                   ` haaber
@ 2023-05-22  7:40                     ` Ming Hung Tsai
  2023-05-23 15:24                       ` [SOLVED] " haaber
  0 siblings, 1 reply; 21+ messages in thread
From: Ming Hung Tsai @ 2023-05-22  7:40 UTC (permalink / raw)
  To: lvm-devel

Hi,

There's a debug option `-v` for thin_dump/thin_repair that shows verbose logs,
including details of the repair. In your case, you should see one compatible
root pair found in your metadata:

```
compatible roots (1):
(1150, 7643)
```

Once you have thin_repair'ed the metadata, you can swap the repaired copy
into the pool using lvconvert:

`lvconvert qubes_dom0/pool00 --swapmetadata --poolmetadata qubes_dom0/newlv`

The two volumes "pool00_tmeta" and "newlv" will then have their names
swapped, i.e., "newlv" becomes "pool00_tmeta" and the original
"pool00_tmeta" becomes "newlv", so you keep the old metadata as a backup.


On Mon, May 22, 2023 at 2:57 PM haaber <haaber@web.de> wrote:

> Hi Ming
>
> thank you so much. I compiled it, and did make install, but lvconvert is
> still the old one. It should be possible to do the job with   pdata_tools
> directly, right !?  I had formerly created a
> "newlv" inside my qubes_dom0 pool, so I did   run
>
> ./pdata_tools  thin_repair -i /dev/qubes_dom0/pool00_tmeta   -o
> /dev/qubes_dom0/newlv
>
> that command worked 5 seconds, and came back without any notice, which
> usually is good sign. I did not run it with a verbose flag, stupid me.
>
> Can I now run some command that activates the pool with "newlv" as
> metadata ? Or backup the old metadata file, and then copy "newlv" metadata
> into the pool00_tmeta ? I am, of course,  afraid of destroying it all, so
> close to  the end, so I better ask once more  :-)
>
> best, Bernhard
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [SOLVED] Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure
  2023-05-22  7:40                     ` Ming Hung Tsai
@ 2023-05-23 15:24                       ` haaber
  0 siblings, 0 replies; 21+ messages in thread
From: haaber @ 2023-05-23 15:24 UTC (permalink / raw)
  To: lvm-devel

Dear Ming, Zdenek and others,

On 5/22/23 09:40, Ming Hung Tsai wrote:
> lvconvert qubes_dom0/pool00 --swapmetadata --poolmetadata qubes_dom0/newlv

issue solved, thank you SO MUCH! Just copying all data to a new drive :)

best, Bernhard



^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2023-05-23 15:24 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-25 13:49 Data recovery -- thin provisioned LVM metadata (?) problem after hardware failure haaber
2023-04-26 11:10 ` Zdenek Kabelac
2023-04-26 13:12   ` haaber
2023-04-27  9:29     ` Zdenek Kabelac
2023-05-03 16:48       ` haaber
2023-05-04 13:17         ` Zdenek Kabelac
2023-05-04 16:31           ` haaber
2023-05-05 15:14             ` Zdenek Kabelac
2023-05-04 17:06           ` haaber
2023-05-05  9:42             ` Ming Hung Tsai
2023-05-05 15:07             ` Zdenek Kabelac
2023-05-05 16:25               ` Ming Hung Tsai
2023-05-11  7:39               ` haaber
2023-05-12  3:29                 ` Ming Hung Tsai
2023-05-12 18:05                   ` haaber
2023-05-13  3:20                     ` Ming Hung Tsai
2023-05-17 15:17                 ` Ming Hung Tsai
2023-05-20 20:34                   ` haaber
2023-05-22  7:40                     ` Ming Hung Tsai
2023-05-23 15:24                       ` [SOLVED] " haaber
2023-04-26 12:06 ` Ming Hung Tsai

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).