* dev loop ~23% slower?
@ 2020-02-17  3:18 Chris Murphy
From: Chris Murphy @ 2020-02-17  3:18 UTC (permalink / raw)
  To: Linux FS Devel, Btrfs BTRFS

Hi,
I see an approximately 23% reduction in performance through a loop
device. Is it expected? This is kernel 5.5.3.

The setup is SSD, plain partitions, no LVM or encryption, Btrfs. This
scrub performance is typical.

$ sudo btrfs scrub status /
UUID:             b8e290d5-1dc5-429f-8201-10ca5b2c0b95
Scrub started:    Sun Feb 16 19:39:01 2020
Status:           finished
Duration:         0:00:54
Total to scrub:   28.00GiB
Rate:             531.06MiB/s
Error summary:    no errors found
[chris@fmac ~]$

On this file system is a sparse file, chattr +C is set, and it's
attached to /dev/loop0 and mounted at /mnt.
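
For reference, the setup was along these lines (a sketch; the file name,
size, and mkfs options below are placeholders, not the exact ones used):

$ sudo touch /var/lib/scratch.img
$ sudo chattr +C /var/lib/scratch.img               # set NOCOW while the file is still empty
$ sudo truncate -s 20G /var/lib/scratch.img         # make it a sparse file
$ sudo losetup --find --show /var/lib/scratch.img   # attaches it, prints e.g. /dev/loop0
$ sudo mkfs.btrfs /dev/loop0
$ sudo mount /dev/loop0 /mnt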

$ sudo btrfs scrub status /mnt
UUID:             63a7e2b9-6a5e-4e94-9cc9-f90d01de7541
Scrub started:    Sun Feb 16 20:06:51 2020
Status:           finished
Duration:         0:00:13
Total to scrub:   5.15GiB
Rate:             405.79MiB/s
Error summary:    no errors found
$

I don't think filesystem overhead accounts for much more than a couple
percent of this, so I'm curious where the slowdown might be happening.
The "hosting" Btrfs file system is not busy at all at the time of the
loop-mounted filesystem's scrub. I did issue 'echo 3 >
/proc/sys/vm/drop_caches' before scrubbing the loop-mounted image;
otherwise I get ~1.72GiB/s scrubs, which exceeds the performance of
the SSD (which is in the realm of 550MiB/s max).
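
The sequence before each scrub of the loop-mounted filesystem was
roughly this (a sketch; the scrub invocation is an assumption):

$ sync                                          # flush dirty pages first
$ echo 3 | sudo tee /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
$ sudo btrfs scrub start -B /mnt                # -B: wait for the scrub and print stats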


Thanks,

-- 
Chris Murphy


* Re: dev loop ~23% slower?
From: Roman Mamedov @ 2020-02-17  5:26 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Linux FS Devel, Btrfs BTRFS

On Sun, 16 Feb 2020 20:18:05 -0700
Chris Murphy <lists@colorremedies.com> wrote:

> I don't think filesystem overhead accounts for much more than a couple
> percent of this, so I'm curious where the slowdown might be happening.
> The "hosting" Btrfs file system is not busy at all at the time of the
> loop-mounted filesystem's scrub. I did issue 'echo 3 >
> /proc/sys/vm/drop_caches' before scrubbing the loop-mounted image;
> otherwise I get ~1.72GiB/s scrubs, which exceeds the performance of
> the SSD (which is in the realm of 550MiB/s max).

Try comparing the plain dd read speed of that FS image with the dd read
speed from the underlying device of the host filesystem. With scrubs you
might be testing the same metric, but it's a rather elaborate way to do
so -- and this also excludes any influence from the loop device driver,
or at least helps figure out the extent of it.
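
Something like this would separate the layers on that setup (the file
and device names below are only guesses, adjust to the actual paths):

dd if=/path/to/image.img iflag=direct of=/dev/null bs=1M count=2048   # through the host Btrfs
dd if=/dev/loop0 iflag=direct of=/dev/null bs=1M count=2048           # through the loop driver
dd if=/dev/sdXn iflag=direct of=/dev/null bs=1M count=2048            # raw read from the SSD partition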

For me on 5.4.20:

dd if=zerofile iflag=direct of=/dev/null bs=1M
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.68213 s, 583 MB/s

dd if=/dev/mapper/cryptohome iflag=direct of=/dev/null bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.12917 s, 686 MB/s

Personally I am not really surprised by this difference; of course going
through a filesystem is going to introduce overhead compared to reading
directly from the block device it sits on (583 MB/s through the file vs
686 MB/s from the device above, i.e. roughly 15% slower). Briefly testing
the same on XFS, though, it seems to have less of it: about 6% instead of
15% here.

-- 
With respect,
Roman

