Thanks for the feedback.

Think of lvmsync as a tool that reads the block changes from the cow device, in the format:
<offset><chunk_size><data>...
Let's assume that I am able to recreate this cow format instantly back to the server,
and present it as a file named "cowfile" on the file system, for simplicity.
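To make the record format concrete, here is a minimal Python sketch of a "rollforward" applier that replays such a stream onto a device or image file. The 8-byte big-endian offset and 4-byte chunk-size header widths are assumptions chosen for the example, not lvmsync's actual on-disk format:

```python
import struct

# Assumed record header for illustration: 8-byte offset, 4-byte chunk size.
HEADER = struct.Struct(">QI")

def apply_cowfile(cow_path, dev_path):
    """Replay <offset><chunk_size><data> records from cow_path onto dev_path."""
    with open(cow_path, "rb") as cow, open(dev_path, "r+b") as dev:
        while True:
            hdr = cow.read(HEADER.size)
            if not hdr:
                break  # end of record stream
            offset, size = HEADER.unpack(hdr)
            data = cow.read(size)
            if len(data) != size:
                raise IOError("truncated cow record")
            dev.seek(offset)
            dev.write(data)
```

The same loop could write into a snapshot-cow area instead of the origin, which is essentially what the question below asks LVM to support natively.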

Would it then be possible to use this cowfile in some way to inform LVM about the location of the snapshot area, so that lvconvert --merge can be used to restore the data more quickly?

The cowfile will include all blocks for the logical volume.

Regards Tomas

Den tis 22 okt. 2019 kl 18:15 skrev Stuart D. Gathman <stuart@gathman.org>:
On Tue, 22 Oct 2019, Zdenek Kabelac wrote:

> Dne 22. 10. 19 v 17:29 Dalebjörk, Tomas napsal(a):
>> But, it would be better if the cow device could be recreated in a faster
>> way, mentioning that all blocks are present on an external device, so that
>> the LV volume can be restored much quicker using "lvconvert --merge"
>> command.

> I do not want to break your imagination here, but that is exactly the thing
> you can do with thin provisioning and thin_delta tool.

lvconvert --merge does a "rollback" to the point at which the snapshot
was taken.  The master LV already has current data.  What Tomas wants is
to be able to do a "rollforward" from the point at which the snapshot was
taken.  He also wants to be able to put the cow volume on an
external/remote medium, and add a snapshot using an already existing cow.

This way, restoring means copying the full volume from backup, creating
a snapshot using the existing external cow, and then lvconvert --merge
instantly and logically applies the cow changes while updating the
master LV.

Pros:

"Old" snapshots are exactly as efficient as thin when there is exactly
one.  They only get inefficient with multiple snapshots.  On the other
hand, thin volumes are as inefficient as an old LV with one snapshot.
An old LV is as efficient, and as anti-fragile, as a partition.  Thin
volumes are much more flexible, but depend on much more fragile,
database-like metadata.

For this reason, I always prefer "old" LVs when the functionality of
thin LVs is not actually needed.  I can even manually recover from
trashed metadata by editing it, as it is human-readable text.

Updates to the external cow can be pipelined (but then properly
handling reads becomes non-trivial - there are mature remote block
device implementations for Linux that will do the job).

Cons:

For the external cow to be useful, updates to it must be *strictly*
serialized.  This is doable, but not as obvious or trivial as it might
seem at first glance.  (Remote block device software will take care
of this as well.)
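One simple way to enforce that strict ordering, sketched here in Python (this is an illustration of the idea, not how any existing remote block device implements it), is to tag each update with a sequence number and buffer out-of-order arrivals until the gap closes:

```python
class SerializedApplier:
    """Apply (seq, offset, data) updates strictly in sequence order,
    buffering any that arrive early."""

    def __init__(self, write_fn):
        self.write_fn = write_fn  # callback: write_fn(offset, data)
        self.next_seq = 0         # next sequence number we may apply
        self.pending = {}         # seq -> (offset, data), arrived early

    def submit(self, seq, offset, data):
        self.pending[seq] = (offset, data)
        # Drain everything that is now contiguous with what was applied.
        while self.next_seq in self.pending:
            off, buf = self.pending.pop(self.next_seq)
            self.write_fn(off, buf)
            self.next_seq += 1
```

An update that arrives ahead of its predecessors simply waits in the pending map, so the cow on disk never reflects a reordered write stream.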

The "rollforward" must be applied to the backup image of the snapshot.
If the admin gets it paired with the wrong backup, massive corruption
ensues.  This could be automated.  E.g. the full image backup and
external cow would have unique matching names.  Or the full image backup
could compute an md5 in parallel, which would be store with the cow.
But none of those tools currently exist.
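The md5-pairing idea could be sketched like this: compute the digest while the full-image backup is being streamed, store it alongside the cow, and refuse the rollforward on mismatch. The function names and the digest-file layout here are hypothetical, chosen only to illustrate the check:

```python
import hashlib

def md5_of_stream(src, dst, chunk=1 << 20):
    """Copy src file object to dst, computing md5 of the bytes in parallel."""
    h = hashlib.md5()
    while True:
        buf = src.read(chunk)
        if not buf:
            break
        h.update(buf)
        dst.write(buf)
    return h.hexdigest()

def check_pairing(image_path, digest_path):
    """Refuse a rollforward if the backup image does not match the digest
    stored next to the external cow."""
    h = hashlib.md5()
    with open(image_path, "rb") as f:
        for buf in iter(lambda: f.read(1 << 20), b""):
            h.update(buf)
    with open(digest_path) as f:
        stored = f.read().strip()
    if h.hexdigest() != stored:
        raise RuntimeError("cow/backup pairing mismatch: refusing merge")
```

The digest costs one extra hash pass over data that is being read anyway, so it adds essentially no I/O to the backup itself.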

--
              Stuart D. Gathman <stuart@gathman.org>
"Confutatis maledictis, flamis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.