On Wed, Jun 15, 2022 at 03:42:17PM +0800, Zhiyong Ye wrote:
>
>
> On 6/14/22 10:54 PM, Gionatan Danti wrote:
> > On 2022-06-14 15:29, Zhiyong Ye wrote:
> > > The reason for this may be that once a snapshot of the volume has
> > > been created, each write to an existing block will cause a COW
> > > (copy-on-write), and the COW copies the entire data block at
> > > chunk-size granularity. For example, when the chunk size is 64k,
> > > even if only 4k of data is written, the entire 64k data block will
> > > be copied. I'm not sure if I understand this correctly.
> >
> > Yes, in your case, the added copies are lowering total available IOPs.
> > But note how the decrease is sub-linear (from 64K to 1M you have a 16x
> > increase in chunk size but "only" a 10x hit in IOPs): this is due to
> > the lower metadata overhead.
>
> It seems that the cost of the COW copies when sending 4k requests is
> much greater than the savings from the lower metadata overhead.
>
> > A last try: if you can, please regenerate your thin volume with 64K
> > chunks and set fio to execute 64K requests. Let's see if LVM is at
> > least smart enough to avoid copying a chunk that is about to be
> > completely overwritten.
>
> I regenerated the thin volume with a chunk size of 64K; the random
> write performance measured with fio 64k requests is as follows:
>
> case                   iops
> thin lv                9381
> snapshotted thin lv    8307

That seems reasonable. My conclusion is that dm-thin (which is what LVM
uses) is not a good fit for workloads with many small random writes and
frequent snapshots, due to its 64k minimum chunk size. This also
explains why dm-thin does not allow smaller blocks: not only would they
limit thin pools to very small sizes, they would also impose massive
metadata write overhead. Hopefully dm-thin v2 will improve the
situation.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
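
For anyone who wants to reproduce the 64K test above: the setup Zhiyong
describes amounts to roughly the commands below. This is only a sketch
under my own assumptions; the volume group name (vg0), the LV and pool
names, the sizes, and the fio queue depth are placeholders, not values
taken from this thread.

  # Thin pool with a 64K chunk size (vg0 and all sizes are placeholders)
  lvcreate --type thin-pool --chunksize 64k --size 100G --name pool0 vg0

  # Thin volume in that pool, plus a snapshot of it so that overwrites
  # of already-allocated blocks have to break sharing (COW)
  lvcreate --thin --virtualsize 50G --name thinlv vg0/pool0
  lvcreate --snapshot --name thinsnap vg0/thinlv

  # 64k random writes against the origin thin volume (destroys its data)
  fio --name=randwrite64k --filename=/dev/vg0/thinlv --rw=randwrite \
      --bs=64k --ioengine=libaio --iodepth=32 --direct=1 \
      --runtime=60 --time_based --group_reporting

Running the fio job once before the snapshot is taken and once after it
should give numbers comparable to the "thin lv" and "snapshotted thin lv"
rows above.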