Date: Mon, 17 Feb 2020 10:26:10 +0500
From: Roman Mamedov
To: Chris Murphy
Cc: Linux FS Devel, Btrfs BTRFS
Subject: Re: dev loop ~23% slower?
Message-ID: <20200217102610.6e92da97@natsu>

On Sun, 16 Feb 2020 20:18:05 -0700
Chris Murphy wrote:

> I don't think file system overhead accounts for much more than a couple
> percent of this, so I'm curious where the slowdown might be happening?
> The "hosting" Btrfs file system is not busy at all at the time of the
> loop-mounted filesystem's scrub. I did issue 'echo 3 >
> /proc/sys/vm/drop_caches' before scrubbing the loop-mounted image,
> otherwise I get ~1.72GiB/s scrubs, which exceeds the performance of the
> SSD (which is in the realm of 550MiB/s max).

Try comparing the plain dd read speed of that FS image with the dd read
speed of the underlying device of the host filesystem. Scrub measures
much the same thing, but in a rather elaborate way -- and dd also
excludes any influence from the loop device driver, or at least shows
how much of the difference it accounts for.

For me on 5.4.20:

dd if=zerofile iflag=direct of=/dev/null bs=1M
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.68213 s, 583 MB/s

dd if=/dev/mapper/cryptohome iflag=direct of=/dev/null bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.12917 s, 686 MB/s

Personally I am not really surprised by this difference: going through a
filesystem is of course going to introduce overhead compared to reading
directly from the block device it sits on. Briefly testing the same on
XFS, it does seem to have less of it -- about 6% instead of the 15% here.

-- 
With respect,
Roman
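
P.S. A minimal sketch of the comparison described above, assuming the
image lives at /mnt/host/vm.img, the host filesystem sits on /dev/sda2,
and the image attaches as /dev/loop0 (all three names are placeholders
for your setup):

# flush caches so the file read is not served from RAM
echo 3 > /proc/sys/vm/drop_caches
# the image file, read through the host filesystem
dd if=/mnt/host/vm.img iflag=direct of=/dev/null bs=1M count=2048
# the block device underneath the host filesystem
dd if=/dev/sda2 iflag=direct of=/dev/null bs=1M count=2048
# the same image once more, but via the loop driver
losetup -f --show /mnt/host/vm.img
dd if=/dev/loop0 iflag=direct of=/dev/null bs=1M count=2048
losetup -d /dev/loop0

If the /dev/loop0 number roughly matches the plain file read, the loop
driver is not where your ~23% goes; if it comes out noticeably lower,
that is the place to look.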