From: Zdenek Kabelac <firstname.lastname@example.org>
To: Mitta Sai Chaithanya <email@example.com>,
LVM2 development <firstname.lastname@example.org>,
Pawan Sharma <email@example.com>,
Cc: Kapil Upadhayay <firstname.lastname@example.org>
Subject: Re: [linux-lvm] [EXTERNAL] Re: LVM2 : performance drop even after deleting the snapshot
Date: Mon, 17 Oct 2022 15:10:46 +0200 [thread overview]
Message-ID: <email@example.com> (raw)
Dne 14. 10. 22 v 21:31 Mitta Sai Chaithanya napsal(a):
> Hi Zdenek Kabelac,
> Thanks for your quick reply and suggestions.
> We conducted a couple of tests on Ubuntu 22.04 and observed similar performance
> behavior after thin-snapshot deletion, without writing any data anywhere.
> *Commands used to create Thin LVM volume*:
> - lvcreate -L 480G --poolmetadataspare n --poolmetadatasize 16G
> --chunksize=64K --thinpool ThinDataLV ThinVolGrp
> - lvcreate -n ext4.ThinLV -V 100G --thinpool ThinDataLV ThinVolGrp
So now it's clear you are talking about thin snapshots - this is a very
different story (we normally use the term "COW" volumes for thick snapshots).
I'll consult more with the thinp author - however it looks to me like you are
using the same device to store both data & metadata.
This is always a highly sub-optimal solution - the metadata device is best
stored on a fast (low-latency) device.
So my wild guess: you are possibly using a rotational device backend to store
your thin-pool's metadata volume, and your setup then becomes very sensitive
to metadata fragmentation.
The thin-pool was designed to be used with SSD/NVMe for metadata, which is far
less sensitive to seeking.
So when you 'create' a snapshot, the metadata gets updated; when you remove a
thin snapshot, the metadata again receives a lot of changes (especially when
your origin volume is already populated). Fragmentation is inevitable, and you
pay a high penalty for holding the metadata device on the same drive as your
data.
So while there are some plans to improve the metadata logistics, I would not
expect miracles on your particular setup - I'd highly recommend plugging in
some SSD/NVMe storage for your thin-pool metadata; that is the way to get
better 'benchmarking' numbers here.
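The relocation above can be sketched roughly as follows - a hedged, illustrative command sequence only, not part of the original mail. The device path /dev/nvme0n1p1 and the LV name ThinMetaLV are assumptions; the VG/pool names come from the commands quoted earlier, and the actual metadata contents must still be migrated (e.g. with thin_dump/thin_restore) before reactivating the pool:

```shell
# ASSUMED device and LV names for illustration only.
# Add a fast PV to the existing volume group:
vgextend ThinVolGrp /dev/nvme0n1p1

# Create a replacement metadata LV pinned to the fast PV:
lvcreate -n ThinMetaLV -L 16G ThinVolGrp /dev/nvme0n1p1

# Swap it in as the pool's metadata device (pool must be inactive;
# note lvconvert swaps the LVs, it does not copy the metadata contents -
# migrate them separately, e.g. via thin_dump/thin_restore):
lvconvert --thinpool ThinVolGrp/ThinDataLV --poolmetadata ThinVolGrp/ThinMetaLV
```

These commands need root and real block devices, so treat them as a sketch to adapt, not a script to paste.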
For an improvement on your current setup - try to find the largest chunk size
at which your data 'sharing' is still reasonably valuable. This depends on the
data-type usage, but a 256K chunk size might be a good compromise (with
zeroing disabled, if you hunt for the best performance).
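The tuning suggestion can be sketched by adapting the pool-creation command quoted earlier - the 480G/16G sizes and VG/LV names are from the original mail, while the 256K chunk size and disabled zeroing (-Zn) are the changes being illustrated, not a measured recommendation:

```shell
# Recreate the thin-pool with a larger chunk size and zeroing disabled
# (sizes and names taken from the commands quoted above):
lvcreate -L 480G --poolmetadataspare n --poolmetadatasize 16G \
         --chunksize 256K -Zn --thinpool ThinDataLV ThinVolGrp
```

Recreating the pool destroys its contents, so benchmark this on scratch storage.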
PS: later mails suggest you are using some 'MS Azure' devices?? Please redo
your testing with your local hardware/storage, where you have precise
guarantees of storage drive performance - testing in the Cloud is random by
nature.
linux-lvm mailing list
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
Thread overview: 13+ messages
2022-10-12 17:12 [linux-lvm] LVM2 : performance drop even after deleting the snapshot Pawan Sharma
2022-10-13 6:53 ` Pawan Sharma
2022-10-13 10:50 ` Zdenek Kabelac
2022-10-14 19:31 ` [linux-lvm] [EXTERNAL] " Mitta Sai Chaithanya
2022-10-17 13:10 ` Zdenek Kabelac [this message]
2022-10-17 13:41 ` Erwin van Londen
2022-10-20 18:19 ` Zdenek Kabelac
2022-10-18 3:33 ` Pawan Sharma
2022-10-18 11:15 ` Zdenek Kabelac
2022-10-14 19:50 ` [linux-lvm] " Roger Heflin
2022-10-14 20:28 ` Roberto Fastec
2022-10-17 5:01 ` Kapil Upadhayay
2022-10-17 15:16 ` Demi Marie Obenour