* [linux-lvm] Performance penalty for 4k requests on thin provisioned volume
@ 2017-09-13 15:33 Dale Stephenson
  2017-09-13 20:19 ` Zdenek Kabelac
  0 siblings, 1 reply; 10+ messages in thread
From: Dale Stephenson @ 2017-09-13 15:33 UTC (permalink / raw)
  To: linux-lvm

Distribution: centos-release-7-3.1611.el7.centos.x86_64
Kernel: Linux 3.10.0-514.26.2.el7.x86_64
LVM: 2.02.166(2)-RHEL7 (2016-11-16)

The volume group consisted of an 8-drive array of 500 GB SSDs, plus an additional SSD of the same size.  The array had 64 KB stripes.
The thin pool was created with the -Zn option and a 512 KB chunk size (one full stripe), size 3 TB, with a 16 GB metadata volume.  The data was entirely on the 8-drive RAID; the metadata was entirely on the 9th drive.
The virtual volume “thin” was 300 GB.  I filled it with dd beforehand so that it would be fully provisioned for the test.
The volume “thick” was also 300 GB, an ordinary (fully provisioned) volume, also entirely on the 8-drive array.
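For reference, the layout above could be approximated with something like the following.  This is a sketch, not the exact commands used: the device names (/dev/md0 for the 8-drive array, /dev/sdi for the 9th SSD) and the volume/pool names are hypothetical, and the commands need root and real devices.

```shell
# Hypothetical PVs: /dev/md0 = 8-drive SSD array, /dev/sdi = 9th SSD
vgcreate vg /dev/md0 /dev/sdi

# Data LV on the array, metadata LV on the separate SSD
lvcreate -n pool -L 3T vg /dev/md0
lvcreate -n poolmeta -L 16G vg /dev/sdi

# Tie them together as a thin pool: 512k chunks, zeroing disabled (-Zn)
lvconvert --type thin-pool --chunksize 512k -Zn \
    --poolmetadata vg/poolmeta vg/pool

# 300 GB thin volume, plus a 300 GB ordinary volume on the array
lvcreate -n thin -V 300G --thinpool vg/pool
lvcreate -n thick -L 300G vg /dev/md0
```

Creating the data and metadata LVs explicitly before the lvconvert is what pins the data to the array and the metadata to the 9th drive, as described above.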

Four tests were run directly against each volume using fio-2.2.8: random read, random write, sequential read, and sequential write.  Single thread, 4 KB block size, 90 s run time.
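For concreteness, one of the four runs might have looked like this.  The device path is a placeholder, and direct I/O and iodepth=1 are assumptions on my part; the original options beyond block size, thread count, and runtime were not stated.

```shell
# One of the four runs (random read against the thin volume).
# /dev/vg/thin is hypothetical; --direct=1 and --iodepth=1 are assumed.
fio --name=randread --filename=/dev/vg/thin --rw=randread --bs=4k \
    --numjobs=1 --iodepth=1 --direct=1 --runtime=90 --time_based
```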

The thin volume showed much worse performance in this test.

Thin volume:
Random read : io=51607MB, bw=587168KB/s, iops=146792, runt= 90001msec
Random write: io=46530MB, bw=529406KB/s, iops=132351, runt= 90001msec
Sequential read : io=176561MB, bw=1961.7MB/s, iops=10462, runt= 90006msec
Sequential write: io=162451MB, bw=1804.1MB/s, iops=9626, runt= 90006msec

Thick volume:
Random read : io=88350MB, bw=981.68MB/s, iops=251303, runt= 90001msec
Random write: io=77905MB, bw=886372KB/s, iops=221592, runt= 90001msec
Sequential read : io=89095MB, bw=989.96MB/s, iops=253421, runt= 90001msec
Sequential write: io=77742MB, bw=884520KB/s, iops=221130, runt= 90001msec

As you can see, the iops for the thin-provisioned volume are dramatically lower than for the thick volume.  This is also true with the default 64 KB chunk size.

Running the same fio test with a 512 KB request size shows similar performance for thick and thin volumes, so the maximum potential throughput of the thin volume seems broadly comparable to the thick volume.  But I’d like the maximum number of iops to at least be within shouting distance of thick volumes.  Is there anything that can be done to improve that, with either this version of lvm/device-mapper or later versions?  I typically use xfs on top of volumes, and xfs is very fond of 4k requests.

Dale Stephenson



Thread overview: 10+ messages
-- links below jump to the message on this page --
2017-09-13 15:33 [linux-lvm] Performance penalty for 4k requests on thin provisioned volume Dale Stephenson
2017-09-13 20:19 ` Zdenek Kabelac
2017-09-13 22:39   ` Dale Stephenson
2017-09-14  9:00     ` Zdenek Kabelac
2017-09-14  9:37       ` Zdenek Kabelac
2017-09-14 10:52         ` Gionatan Danti
2017-09-14 10:57         ` Gionatan Danti
2017-09-14 11:13           ` Zdenek Kabelac
2017-09-14 14:32             ` Dale Stephenson
2017-09-14 15:25       ` Dale Stephenson
