From: Christopher Lameter <cl@linux.com>
To: Jesper Dangaard Brouer <netdev@brouer.com>
Cc: Pekka Enberg <penberg@iki.fi>, Michal Hocko <mhocko@kernel.org>,
"Tobin C. Harding" <me@tobin.cc>,
Vlastimil Babka <vbabka@suse.cz>,
"Tobin C. Harding" <tobin@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>, Tejun Heo <tj@kernel.org>,
Qian Cai <cai@lca.pw>,
Linus Torvalds <torvalds@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Mel Gorman <mgorman@techsingularity.net>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
Alexander Duyck <alexander.duyck@gmail.com>
Subject: Re: [PATCH 0/1] mm: Remove the SLAB allocator
Date: Wed, 17 Apr 2019 13:27:43 +0000 [thread overview]
Message-ID: <0100016a2b7b515b-2a0a4fab-6c9d-4eeb-a0c8-d3fffbf64e55-000000@email.amazonses.com> (raw)
In-Reply-To: <20190417105018.78604ad8@carbon>
On Wed, 17 Apr 2019, Jesper Dangaard Brouer wrote:
> I do think SLUB has a number of pathological cases where SLAB is
> faster. It was significantly more difficult to get good bulk-free
> performance for SLUB. SLUB is only fast as long as objects belong to
> the same page. To get good bulk-free performance when objects are
> "mixed", I coded this[1] way-too-complex fast-path code to counteract
> it (joint work with Alex Duyck).
Right. SLUB usually compensates for that with superior allocation
performance.
> > It's, of course, worth thinking about other pathological cases too.
> > Workloads that cause large allocations are one. Workloads that cause
> > lots of slab cache shrinking are another.
>
> I also worry about long uptimes when SLUB objects/pages get too
> fragmented... as I said, SLUB is only efficient when objects are
> returned to the same page, whereas SLAB does not have this constraint.
??? Why would SLUB pages get more fragmented? SLUB has fragmentation
prevention methods that SLAB does not have.
Thread overview: 14+ messages
2019-04-10 2:47 [PATCH 0/1] mm: Remove the SLAB allocator Tobin C. Harding
2019-04-10 2:47 ` [PATCH 1/1] mm: Remove " Tobin C. Harding
2019-04-10 8:02 ` [PATCH 0/1] mm: Remove the " Vlastimil Babka
2019-04-10 8:16 ` Tobin C. Harding
2019-04-11 7:55 ` Michal Hocko
2019-04-11 8:27 ` Pekka Enberg
2019-04-17 8:50 ` Jesper Dangaard Brouer
2019-04-17 13:27 ` Christopher Lameter [this message]
2019-04-17 13:38 ` Michal Hocko
2019-04-22 14:43 ` Jesper Dangaard Brouer
2019-04-11 8:44 ` Mel Gorman
2019-04-10 21:53 ` David Rientjes
2019-04-12 11:28 ` Mel Gorman
2019-04-17 3:52 ` Andrew Morton