From: Minchan Kim <minchan@kernel.org>
To: Matthew Wilcox <willy@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
<linux-kernel@vger.kernel.org>, <sergey.senozhatsky@gmail.com>,
<iamjoonsoo.kim@lge.com>, <ngupta@vflare.org>,
<zhouxianrong@huawei.com>, <zhouxiyu@huawei.com>,
<weidu.du@huawei.com>, <zhangshiming5@huawei.com>,
<Mi.Sophia.Wang@huawei.com>, <won.ho.park@huawei.com>
Subject: Re: memfill v2 now with ARM and x86 implementations
Date: Mon, 13 Mar 2017 14:17:50 +0900 [thread overview]
Message-ID: <20170313051750.GA18927@bbox> (raw)
In-Reply-To: <20170311145640.GB1860@bombadil.infradead.org>
Hi Matthew,
On Sat, Mar 11, 2017 at 06:56:40AM -0800, Matthew Wilcox wrote:
> On Mon, Feb 06, 2017 at 12:16:44AM +0900, Minchan Kim wrote:
> > +static inline void zram_fill_page(char *ptr, unsigned long len,
> > + unsigned long value)
> > +{
> > + int i;
> > + unsigned long *page = (unsigned long *)ptr;
> > +
> > + WARN_ON_ONCE(!IS_ALIGNED(len, sizeof(unsigned long)));
> > +
> > + if (likely(value == 0)) {
> > + memset(ptr, 0, len);
> > + } else {
> > + for (i = 0; i < len / sizeof(*page); i++)
> > + page[i] = value;
> > + }
> > +}
>
> I've hacked up memset32/memset64 for both ARM and x86 here:
>
> http://git.infradead.org/users/willy/linux-dax.git/shortlog/refs/heads/memfill
Thanks for the patch.
>
> Can you do some performance testing and see if it makes a difference?
I tested with zram *full* of non-zero 100M dedupable data (i.e.,
an ideal case) on x86. With this, I see a 7% improvement.
perf stat -r 10 dd if=/dev/zram0 of=/dev/null
vanilla: 0.232050465 seconds time elapsed ( +- 0.51% )
memset_l: 0.217219387 seconds time elapsed ( +- 0.07% )
I doubt it brings such a benefit in read workloads where only a small
percentage of the data is non-zero dedupable (e.g., under 3%), but it
keeps the code simple and is still a performance win.
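For reference, the open-coded fill loop in zram_fill_page above can be
expressed on top of a memset_l-style word fill. The following is a minimal
userspace sketch, not the kernel code: the fallback here is illustrative
(the kernel's implementation lives in lib/string.c), and the `_sketch`
names are assumptions introduced for this example.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace sketch of the generic C fallback for memset_l: fill
 * `count` machine words with `value`. The kernel version selects
 * memset32/memset64 based on BITS_PER_LONG; this loop is only the
 * portable fallback behavior. */
static void memset_l_sketch(unsigned long *p, unsigned long value,
			    size_t count)
{
	while (count--)
		*p++ = value;
}

/* zram_fill_page rewritten on top of the word fill, keeping the
 * memset() fast path for the common all-zero case. */
static void zram_fill_page_sketch(char *ptr, size_t len,
				  unsigned long value)
{
	/* len is expected to be a multiple of the word size. */
	assert(len % sizeof(unsigned long) == 0);

	if (value == 0)
		memset(ptr, 0, len);
	else
		memset_l_sketch((unsigned long *)ptr, value,
				len / sizeof(unsigned long));
}
```

With this shape, the per-word loop disappears from the zram driver and any
architecture-optimized memset_l implementation is picked up for free.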
Thanks.
>
> At this point, I'd probably ask for the first 5 patches in that git
> branch to be included, and leave out memfill and the shoddy testsuite.
>
> I haven't actually tested either asm implementation ... only the
> C fallback.
Thread overview: 15+ messages
2017-02-05 15:16 [PATCH v4] zram: extend zero pages to same element pages Minchan Kim
2017-02-06 14:49 ` memfill Matthew Wilcox
2017-02-07 2:47 ` memfill zhouxianrong
2017-02-07 4:59 ` memfill Minchan Kim
2017-02-07 19:07 ` memfill James Bottomley
2017-02-08 18:04 ` memfill Matthew Wilcox
2017-02-08 21:01 ` memfill James Bottomley
2017-02-08 21:54 ` memfill Matthew Wilcox
2017-02-07 6:57 ` [PATCH v4] zram: extend zero pages to same element pages Minchan Kim
2017-02-07 9:40 ` memfill David Howells
2017-02-07 17:22 ` memfill Matthew Wilcox
2017-02-07 17:29 ` memfill David Howells
2017-02-07 19:03 ` memfill Matthew Wilcox
2017-03-11 14:56 ` memfill v2 now with ARM and x86 implementations Matthew Wilcox
2017-03-13 5:17 ` Minchan Kim [this message]