Hi

I have some additional questions related to this.
Regarding this statement:
"While the merge is in progress, reads or writes to the origin appear as they were directed to the snapshot being merged."

What exactly does that mean?

Will that mean that, before changes are placed on the origin device, it first has to:
read the data from the snapshot back to the origin, copy the data back from the origin to the snapshot, and only then allow the changes to happen?
If that is the case, does it keep track of the fact that this block should not be copied again?

And will the ongoing merge prioritize this block ahead of the other background copying?

How about read operations?
Will the requested read operations on the origin volume be prioritized ahead of the copying of snapshot data?

I didn't find much information about this, hence why I am asking here.

assuming that someone has executed: lvconvert --merge -b snapshot
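
To make the question concrete, this is roughly the scenario I have in mind (vg0 and the LV name "snap" are just placeholders):

  lvconvert --merge -b vg0/snap   # start merging the snapshot back into its origin, copying in the background
  lvs -a vg0                      # the Data%/Snap% column should show how much snapshot data is still unmerged
  dmsetup status                  # the origin table should show a "snapshot-merge" target while the merge is running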

Thanks for the feedback.

Sent from my iPhone

On 25 Oct 2019, at 18:31, Tomas Dalebjörk <tomas.dalebjork@gmail.com> wrote:


Wow!

Impressive.
This will make history!

If this is possible, then we would be able to implement a solution that can do:
- progressive block-level incremental forever (always incremental at the block level: this already exists)
- instant recovery to a point in time (using the methods you just described)

For example, let's say that a client wants to restore a file system or a logical volume to how it looked yesterday,
even though there is no snapshot, nor any data, left.
Then the client (with some coding) could start from an empty volume, re-attach a COW device, and convert it using lvconvert --merge, so that the copying can be done in the background from the backup server.

If you forget about "how we will re-create the COW device" and just focus on the LVM idea of re-attaching a COW device:
do you think that I have understood it correctly?
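
Roughly, what I imagine is something like the following (all names are placeholders, and I am hand-waving the step where the COW volume gets re-created from the backup server):

  lvcreate -L 100G -n restorevol vg0                   # empty volume to restore into
  lvcreate -L 20G -n cowlv vg0                         # volume that will hold the re-created COW data
  # ... fill /dev/vg0/cowlv with data in the old-snapshot on-disk format ...
  lvconvert --snapshot -Zn vg0/restorevol vg0/cowlv    # re-attach the COW as a snapshot, -Zn so the metadata is not zeroed
  lvconvert --merge -b vg0/cowlv                       # merge it back into the origin in the background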


On Thu, 24 Oct 2019 at 18:01, Zdenek Kabelac <zkabelac@redhat.com> wrote:
On 23. 10. 19 at 13:24, Tomas Dalebjörk wrote:
> I have tested FusionIO together with old thick snapshots.
> I created the thick snapshot on a separate old traditional SATA drive, just to
> check if that could be used as a snapshot target for high-performance disks
> like a FusionIO card.
> For those who don't know about FusionIO: these cards can handle 150,000-250,000 IOPS.
>
> And to be honest, I couldn't bottleneck the SATA disk I used as a thick
> snapshot target.
> The reason why is simple:
> - thick snapshots use sequential write techniques
>
> If I had been using thin snapshots, then the writes would most likely have
> been more randomized on disk, which would have required more spindles to cope
> with this.
>
> Anyhow,
> I am still eager to hear how to use an external device to import snapshots.
> And when I say "import", I am not talking about copy-back, but rather about
> reading data from it.

The format of the 'on-disk' snapshot metadata for an old snapshot is trivial - a
header plus pairs of data offsets (TO-FROM) - I think googling will reveal a couple of
Python tools playing with it.

You can add a pre-created COW image to an LV with lvconvert --snapshot,
and to avoid 'zeroing' the metadata use the option -Zn.
(BTW, in the same way you can detach a snapshot from an LV with --splitsnapshot, so
you can look at what the metadata looks like...)
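
For example, something like this (names are purely illustrative, assuming vg0/origin already has a snapshot vg0/snap):

  lvconvert --splitsnapshot vg0/snap                  # detach the COW volume so it can be examined on its own
  dd if=/dev/vg0/snap bs=512 count=8 | xxd | head     # dump the first sectors: the header (magic, valid flag, version, chunk size)
  lvconvert --snapshot -Zn vg0/origin vg0/snap        # re-attach it later without zeroing the prepared metadata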

Although it's pretty unusual that anyone would first create the COW image with
all the special layout and then merge it into the LV - instead of merging
directly... There is only the 'little' advantage of minimizing the 'offline' time
of such a device (and that's the reason why --split exists).

Regards

Zdenek