From: Vlastimil Babka <vbabka@suse.cz>
To: Michal Hocko <mhocko@suse.com>,
Vincent Guittot <vincent.guittot@linaro.org>
Cc: Christoph Lameter <cl@linux.com>,
Bharata B Rao <bharata@linux.ibm.com>,
linux-kernel <linux-kernel@vger.kernel.org>,
linux-mm@kvack.org, David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Andrew Morton <akpm@linux-foundation.org>,
guro@fb.com, Shakeel Butt <shakeelb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
aneesh.kumar@linux.ibm.com, Jann Horn <jannh@google.com>
Subject: Re: [RFC PATCH v0] mm/slub: Let number of online CPUs determine the slub page order
Date: Wed, 27 Jan 2021 14:38:29 +0100 [thread overview]
Message-ID: <62d61572-830b-a660-8049-3826128343c5@suse.cz> (raw)
In-Reply-To: <20210126135918.GQ827@dhcp22.suse.cz>
On 1/26/21 2:59 PM, Michal Hocko wrote:
>>
>> On 8 CPUs, I run hackbench with up to 16 groups, which means 16*40
>> threads. But I raised it up to 256 groups, which means 256*40 threads,
>> on the 224-CPU system. In fact, hackbench -g 1 (with 1 group) doesn't
>> regress on the 224-CPU system. The next test, with 4 groups, starts
>> to regress by -7%. But the next one, hackbench -g 16, regresses by 187%
>> (the duration is almost 3 times longer). It seems reasonable to assume
>> that the number of running threads and resources scales with the number
>> of CPUs, because we want to run more stuff.
>
> OK, I do understand that more jobs scale with the number of CPUs, but I
> would also expect that higher-order pages are generally more expensive
> to get, so this is not really a clear cut, especially under some more
> demand on the memory where allocations are smooth. So the question
> really is whether this is not just optimizing for artificial conditions.
FWIW, I enabled CONFIG_SLUB_STATS and ran "hackbench -l 16000 -g 16" in a
(small) VM, then checked tools/vm/slabinfo -DA as per the config option's help,
and it seems to be these 2 caches that are stressed:
Name               Objects     Alloc      Free %Fast Fallb O  CmpX UL
kmalloc-512            812  25655535  25654908  71 1     0 0 20082  0
skbuff_head_cache      304  25602632  25602632  84 1     0 0 11241  0
I guess larger pages mean more batched per-cpu allocations without going to the
shared structures or even the page allocator. But an almost 3x longer duration
is still surprising to me. I'll dig more.
Thread overview: 52+ messages
2020-11-18  8:27 [RFC PATCH v0] mm/slub: Let number of online CPUs determine the slub page order Bharata B Rao
2020-11-18 11:25 ` Vlastimil Babka
2020-11-18 19:34 ` Roman Gushchin
2020-11-18 19:53 ` David Rientjes
2021-01-20 17:36 ` Vincent Guittot
2021-01-21  5:30 ` Bharata B Rao
2021-01-21  9:09 ` Vincent Guittot
2021-01-21 10:01 ` Christoph Lameter
2021-01-21 10:48 ` Vincent Guittot
2021-01-21 18:19 ` Vlastimil Babka
2021-01-22  8:03 ` Vincent Guittot
2021-01-22 12:03 ` Vlastimil Babka
2021-01-22 13:16 ` Vincent Guittot
2021-01-23  5:16 ` Bharata B Rao
2021-01-23 12:32 ` Vincent Guittot
2021-01-25 11:20 ` Vlastimil Babka
2021-01-26 23:03 ` Will Deacon
2021-01-27  9:10 ` Christoph Lameter
2021-01-27 11:04 ` Vlastimil Babka
2021-02-03 11:10 ` Bharata B Rao
2021-02-04  7:32 ` Vincent Guittot
2021-02-04  9:07 ` Christoph Lameter
2021-02-04  9:33 ` Vlastimil Babka
2021-02-08 13:41 ` [PATCH] mm, slub: better heuristic for number of cpus when calculating slab order Vlastimil Babka
2021-02-08 14:54 ` Vincent Guittot
2021-02-10 14:07 ` Mel Gorman
2021-01-22 13:05 ` [RFC PATCH v0] mm/slub: Let number of online CPUs determine the slub page order Jann Horn
2021-01-22 13:09 ` Jann Horn
2021-01-22 15:27 ` Vlastimil Babka
2021-01-25  4:28 ` Bharata B Rao
2021-01-26  8:52 ` Michal Hocko
2021-01-26 13:38 ` Vincent Guittot
2021-01-26 13:59 ` Michal Hocko
2021-01-27 13:38 ` Vlastimil Babka [this message]
2021-01-28 13:45 ` Mel Gorman
2021-01-28 13:57 ` Michal Hocko
2021-01-28 14:42 ` Mel Gorman