From: Gao Xiang <hsiangkao@redhat.com>
To: Chao Yu <yuchao0@huawei.com>
Cc: Eric Biggers <ebiggers@kernel.org>,
	jaegeuk@kernel.org, linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH v6] f2fs: compress: support compress level
Date: Fri, 4 Dec 2020 17:10:57 +0800	[thread overview]
Message-ID: <20201204091057.GB2025226@xiangao.remote.csb> (raw)
In-Reply-To: <ffca7f07-653f-1270-72d4-e66ffc8a7473@huawei.com>

On Fri, Dec 04, 2020 at 04:50:14PM +0800, Chao Yu wrote:

...

> 
> > 
> > About the speed, I think it is also limited by the storage device and other
> > conditions (I mean the CPU load during writeback might differ between
> > lz4 and lz4hc-9 due to many other bottlenecks, e.g. UFS 3.0 seq write is
> > somewhat faster than a VM. lz4 may have higher bandwidth on high-
> 
> Yeah, I guess my VM has been limited by its storage bandwidth, and its back-end
> could be a low-end rotating disk...

Yeah, anyway that's in the IO writeback path (no matter whether the time was
spent on IO or on CPU calculation...)

> 
> > end devices since it seems somewhat IO-bound here... I guess but I'm not sure,
> > since pure in-memory lz4 is fast according to lzbench / the lz4 homepage.)
> > 
> > Anyway, it's up to the f2fs folks whether it's useful :) (the CR number is
> > what I expected though... I'm a bit afraid of the CPU runtime load.)
> 
> I just had a glance at the CPU usage numbers (my VM has 16 cores):
> lz4hc takes 11% in the first half and drops to 6% in the second half.
> lz4 takes 6% during the whole process.
> 
> But that's not accurate...

There is a userspace tool, lzbench [1], which benchmarks lz4/lz4hc entirely
in memory. So it's expected that lzbench will consume 100% CPU and reach
the maximum bandwidth (though the in-kernel lz4 version is lower):

Intel Core i7-8700K
                Compression     Decompression   Ratio (%)
memcpy          10362 MB/s      10790 MB/s      100.00
lz4 1.9.2       737 MB/s        4448 MB/s       47.60
lz4hc 1.9.2 -9  33 MB/s         4378 MB/s       36.75

So adding more IO time (due to storage device differences) could lower the
CPU load (and could also make the whole process IO-bound), but the overall
write bandwidth will be lower as well.

[1] https://github.com/inikep/lzbench
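
If anyone wants to reproduce a similar in-memory comparison without the
full lzbench harness, below is a minimal userspace sketch against liblz4
(link with -llz4). Note it uses the userspace library API rather than the
in-kernel lz4 code, and the input file and HC level 9 are just placeholders:

/*
 * Minimal in-memory lz4 vs. lz4hc timing/size check (userspace liblz4).
 * Build: cc -O2 bench.c -o bench -llz4
 */
#include <lz4.h>
#include <lz4hc.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	FILE *f = fopen(argv[1], "rb");
	if (!f) {
		perror("fopen");
		return 1;
	}
	fseek(f, 0, SEEK_END);
	long size = ftell(f);
	fseek(f, 0, SEEK_SET);

	char *src = malloc(size);
	char *dst = malloc(LZ4_compressBound(size));
	if (!src || !dst || fread(src, 1, size, f) != (size_t)size) {
		fprintf(stderr, "read failed\n");
		return 1;
	}
	fclose(f);

	/* fast mode (what plain "lz4" refers to above) */
	double t0 = now_sec();
	int c1 = LZ4_compress_default(src, dst, size, LZ4_compressBound(size));
	double t1 = now_sec();

	/* HC mode at level 9 (the "lz4hc -9" row above) */
	int c2 = LZ4_compress_HC(src, dst, size, LZ4_compressBound(size), 9);
	double t2 = now_sec();

	printf("lz4      : %ld -> %d bytes, %.1f MB/s\n",
	       size, c1, size / (t1 - t0) / 1e6);
	printf("lz4hc -9 : %ld -> %d bytes, %.1f MB/s\n",
	       size, c2, size / (t2 - t1) / 1e6);

	free(src);
	free(dst);
	return 0;
}

(lzbench itself can select the same pair with something like
"lzbench -elz4/lz4hc,9 <file>"; please check its README for the exact
-e syntax, I'm writing that from memory.)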

Thanks,
Gao Xiang

> 
> Thanks,


Thread overview: 14+ messages
2020-12-03  6:17 [PATCH v6] f2fs: compress: support compress level Chao Yu
2020-12-03 19:32 ` [f2fs-dev] " Eric Biggers
2020-12-04  0:31   ` Gao Xiang
2020-12-04  1:56     ` Chao Yu
2020-12-04  2:06       ` Gao Xiang
2020-12-04  2:38         ` Chao Yu
2020-12-04  2:47           ` Gao Xiang
2020-12-04  3:11             ` Chao Yu
2020-12-04  3:21               ` Gao Xiang
2020-12-04  7:09     ` Chao Yu
2020-12-04  7:43       ` Gao Xiang
2020-12-04  8:50         ` Chao Yu
2020-12-04  9:10           ` Gao Xiang [this message]
2020-12-04  1:18   ` Chao Yu
