From the looks of it, the disk, as provisioned out of an Azure pool, is
likely backed by an enterprise RAID array. When you provision the pool
with discard_passdown, the discards generated by removing the snapshot
are also pushed down to the underlying hypervisor or disk array, and you
would need to wait until that process has completed before making any
comparisons.

Your pool status shows discard_passdown is in effect:

ThinVolGrp-ThinDataLV-tpool: 0 1006632960 thin-pool 1 4878/4145152 8325/7864320 - rw discard_passdown queue_if_no_space - 1024

As per the man page:

  --discards passdown|nopassdown|ignore
      Specifies how the device-mapper thin pool layer in the kernel
      should handle discards. ignore causes the thin pool to ignore
      discards. nopassdown causes the thin pool to process discards
      itself to allow reuse of unneeded extents in the thin pool.
      passdown causes the thin pool to process discards itself (like
      nopassdown) and pass the discards to the underlying device.

Try the same operation after changing the thin pool:

  lvchange --discards nopassdown VG/ThinPoolLV
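For example, something along these lines should show the current mode
and let you watch the pool's used data blocks drop while the discards
are processed (a rough sketch only; the names are taken from your
output above, and if lvm2 refuses to change the mode while the pool is
active, deactivate it first with lvchange -an):

  # current discards mode of the pool
  lvs -o lv_name,discards ThinVolGrp/ThinDataLV

  # keep discards inside the pool instead of passing them to the backend
  lvchange --discards nopassdown ThinVolGrp/ThinDataLV

  # the used/total data block counts drop as the pool processes discards
  watch dmsetup status ThinVolGrp-ThinDataLV-tpool

And, picking up Zdenek's point below about keeping metadata off the
data device, a sketch of the same pool with its metadata on a separate
fast PV and a larger chunk size (/dev/sdb and /dev/nvme0n1 are only
placeholders for your data and metadata devices):

  # add the fast device to the VG
  vgextend ThinVolGrp /dev/nvme0n1

  # data LV on the slow device, metadata LV on the fast one
  lvcreate -L 480G -n ThinDataLV ThinVolGrp /dev/sdb
  lvcreate -L 16G -n ThinMetaLV ThinVolGrp /dev/nvme0n1

  # combine them into a thin pool, 256K chunks, zeroing disabled
  lvconvert --type thin-pool --poolmetadata ThinVolGrp/ThinMetaLV \
            --chunksize 256K --zero n ThinVolGrp/ThinDataLV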
-- 
Kind regards,

Erwin van Londen
EvL Consulting
ABN 43 560 744 507
Mobile     +61-434-325795
Phone      +61-7-53213176
Web        http://erwinvanlonden.net
Conference https://iene.3cx.com.au/meet/erwinvlwebmeet
Web Talk   https://iene.3cx.com.au/callus/#erwinvlwebphone

On Mon, 2022-10-17 at 15:10 +0200, Zdenek Kabelac wrote:
> On 14. 10. 22 at 21:31, Mitta Sai Chaithanya wrote:
> > Hi Zdenek Kabelac,
> >          Thanks for your quick reply and suggestions.
> > 
> > We conducted a couple of tests on Ubuntu 22.04 and observed similar
> > performance behavior post thin snapshot deletion, without writing
> > any data anywhere.
> > 
> > *Commands used to create Thin LVM volume*:
> > - lvcreate -L 480G --poolmetadataspare n --poolmetadatasize 16G
> >   --chunksize=64K --thinpool ThinDataLV ThinVolGrp
> > - lvcreate -n ext4.ThinLV -V 100G --thinpool ThinDataLV ThinVolGrp
> 
> Hi
> 
> So now it's clear you are talking about thin snapshots - this is a
> very different story (we normally use the term "COW" volumes for
> thick old snapshots).
> 
> I'll consult more with the thinp author - however, it does look to me
> like you are using the same device to store data & metadata.
> 
> This is always a highly sub-optimal solution - the metadata device is
> best stored on fast (low latency) devices.
> 
> So my wild guess - you are possibly using a rotational device backend
> to store your thin-pool's metadata volume, and then your setup gets
> very sensitive to metadata fragmentation.
> 
> Thin-pool was designed to be used with SSD/NVMe for metadata, which
> is far less sensitive to seeking.
> 
> So when you 'create' a snapshot, the metadata gets updated - when you
> remove a thin snapshot, the metadata again gets a lot of changes
> (especially when your origin volume is already populated) -
> fragmentation is inevitable and you pay a high penalty for holding
> the metadata device on the same drive as your data device.
> 
> So while there are some plans to improve the metadata logistics, I'd
> not expect miracles on your particular setup - I'd highly recommend
> plugging in some SSD/NVMe storage for your thin-pool metadata - this
> is the way to go to get better 'benchmarking' numbers here.
> 
> For an improvement on your setup - try larger chunk size values where
> your data 'sharing' is still reasonably valuable - this depends on
> the data-type usage - but a chunk size of 256K might be a good
> compromise (with zeroing disabled, if you hunt for the best
> performance).
> 
> Regards
> 
> Zdenek
> 
> PS: later mails suggest you are using some 'MS Azure' devices?? - so
> please redo your testing with your local hardware/storage - where you
> have precise guarantees of storage drive performance - testing in the
> Cloud is random by design....
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://listman.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/