From: Yang Shi <yang.shi@linux.alibaba.com>
To: Shakeel Butt <shakeelb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
Matthew Wilcox <willy@infradead.org>,
Andrew Morton <akpm@linux-foundation.org>,
Linux MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [v2 PATCH 1/2] mm: swap: make page_evictable() inline
Date: Mon, 16 Mar 2020 19:05:58 -0700 [thread overview]
Message-ID: <d4036ddf-388d-bfeb-f36a-2ead20f5793f@linux.alibaba.com> (raw)
In-Reply-To: <CALvZod72O-9Qm5bvr2MWKPRiDV3oFCmujawr28DnsSdJx+PmjQ@mail.gmail.com>
On 3/16/20 4:46 PM, Shakeel Butt wrote:
> On Mon, Mar 16, 2020 at 3:24 PM Yang Shi <yang.shi@linux.alibaba.com> wrote:
>> When backporting commit 9c4e6b1a7027 ("mm, mlock, vmscan: no more
>> skipping pagevecs") to our 4.9 kernel, our test bench noticed around a 10%
>> drop in a couple of vm-scalability test cases (lru-file-readonce,
>> lru-file-readtwice and lru-file-mmap-read). I didn't see that much of a
>> drop on my VM (32c-64g-2nodes). It might be caused by the test
>> configuration, which is 32c-256g with NUMA disabled and the tests run in
>> the root memcg, so the tests actually stress only one inactive and one
>> active lru. That is not very common in modern production environments.
>>
>> That commit did two major changes:
>> 1. Call page_evictable()
>> 2. Use smp_mb to force the PG_lru set visible
>>
>> It looks like these two contribute most of the overhead. page_evictable()
>> is an out-of-line function which executes a prologue and epilogue, and it
>> was used by the page reclaim path only. However, lru add is a very hot
>> path, so it is better to make it inline. It also calls page_mapping(),
>> which is not inlined either, but disassembly shows that page_mapping()
>> does no push/pop operations, and it is not straightforward to inline it.
>>
>> Other than this, the smp_mb() is not necessary on x86, since SetPageLRU
>> is an atomic operation which already implies a full memory barrier
>> there; the following patch replaces it with smp_mb__after_atomic().
>>
>> With the two fixes applied, the tests regain around 5% on that test
>> bench and return to normal on my VM. Since the test bench configuration
>> is not that common, and I also saw around a 6% improvement on the latest
>> upstream, this sounds good enough IMHO.
>>
>> Below is the test data (lru-file-readtwice throughput) against v5.6-rc4:
>>
>>     mainline    w/ inline fix
>>     150MB       154MB
>>
>> With this patch the throughput goes up by 2.67%. The data with
>> smp_mb__after_atomic() is shown in the following patch.
>>
>> Fixes: 9c4e6b1a7027 ("mm, mlock, vmscan: no more skipping pagevecs")
>> Cc: Shakeel Butt <shakeelb@google.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
>
> So, I tested on a real machine, limiting the 'dd' to a single node and
> reading a 100 GiB sparse file (less than a single node). I ran just a
> single instance to avoid lru lock contention. The command I used is
> "dd if=file-100GiB of=/dev/null bs=4k". I ran it 10 times with
> drop_caches in between and measured the time it took.
>
> Without patch: 56.64143 +- 0.672 sec
>
> With patches: 56.10 +- 0.21 sec
>
> Reviewed-and-Tested-by: Shakeel Butt <shakeelb@google.com>
Thanks Shakeel. It'd be better to add your test result to the commit log too.
Thread overview: 9+ messages
2020-03-16 22:24 [v2 PATCH 1/2] mm: swap: make page_evictable() inline Yang Shi
2020-03-16 22:24 ` [v2 PATCH 2/2] mm: swap: use smp_mb__after_atomic() to order LRU bit set Yang Shi
2020-03-16 23:47 ` Shakeel Butt
2020-03-16 23:46 ` [v2 PATCH 1/2] mm: swap: make page_evictable() inline Shakeel Butt
2020-03-17 2:05 ` Yang Shi [this message]
2020-03-17 3:00 ` kbuild test robot
2020-03-17 3:24 ` Yang Shi
2020-03-17 3:15 ` kbuild test robot
2020-03-17 8:59 ` Vlastimil Babka