From: Nick Piggin <npiggin@suse.de>
To: David Rientjes <rientjes@google.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>,
	Ingo Molnar <mingo@elte.hu>, Janboe Ye <yuan-bo.ye@motorola.com>,
	linux-kernel@vger.kernel.org, vegard.nossum@gmail.com,
	fche@redhat.com, cl@linux-foundation.org
Subject: Re: [RFC][PATCH] Check write to slab memory which freed already using mudflap
Date: Mon, 20 Jul 2009 10:32:12 +0200
Message-ID: <20090720083212.GA7070@wotan.suse.de>
In-Reply-To: <alpine.DEB.2.00.0907151309540.22582@chino.kir.corp.google.com>

On Wed, Jul 15, 2009 at 01:19:27PM -0700, David Rientjes wrote:
> On Wed, 15 Jul 2009, Nick Piggin wrote:
> 
> > > It's my opinion that slab is on its way out when there's no benchmark that 
> > > shows it is superior by any significant amount.  If that happens (and if 
> > > its successor is slub, slqb, or a yet to be implemented allocator), we can 
> > > probably start a discussion on what's in and what's out at that time.
> > 
> > How are you running your netperf test? Over localhost or remotely?
> > It is a 16 core system? NUMA?
> > 
> 
> I ran it remotely using two machines on the same rack.  Both were four 
> quad-core UMA systems.

OK, I feared as much ;) I concede localhost testing isn't a good
idea for networking tests (especially where cache hotness is
concerned), so my numbers don't mean too much. I'll try to rig up
some remote tests (I don't have a very good network setup here).
Are you using a single 1GbE link, or more?
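
(For reference, the remote setup I have in mind is roughly the
following; just a sketch assuming stock netperf on both machines,
with SERVER standing in for the netserver box:)

  # on the remote machine (the netserver side):
  netserver

  # on the client, pointed at it:
  netperf -H $SERVER -t TCP_RR -l 60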

 
> > It seems pretty variable when I run it here, although there seems
> > to be a pretty clear upper bound on performance that a lot of the
> > results land near (while others drop to less than half that
> > performance).
> > 
> 
> My results from my slub partial slab thrashing patchset comparing slab and 
> slub were with a variety of different thread counts, each a multiple of 
> the number of cores.  The most notable slub regression always appeared in 
> the higher thread counts with this script:

Yes, I was using the script you posted earlier. I did run varying
numbers of threads, but there was so much noise in the results that
I cut it down to a single (high) thread count to get enough runs.

 
> #!/bin/bash
> 
> TIME=60				# seconds
> HOSTNAME=hostname.goes.here	# netserver
> 
> NR_CPUS=$(grep ^processor /proc/cpuinfo | wc -l)
> echo NR_CPUS=$NR_CPUS
> 
> run_netperf() {
> 	for i in $(seq 1 $1); do
> 		netperf -H $HOSTNAME -t TCP_RR -l $TIME &
> 	done
> }
> 
> ITERATIONS=0
> while [ $ITERATIONS -lt 10 ]; do
> 	RATE=0
> 	ITERATIONS=$[$ITERATIONS + 1]	
> 	THREADS=$[$NR_CPUS * $ITERATIONS]
> 	RESULTS=$(run_netperf $THREADS | grep -v '[a-zA-Z]' | awk '{ print $6 }')
> 
> 	for j in $RESULTS; do
> 		RATE=$[$RATE + ${j/.*}]
> 	done
> 	echo threads=$THREADS rate=$RATE
> done
> 
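
(Purely as a sketch of how I'd boil the noisy repeated runs down to a
mean: this assumes the script above is saved as netperf-rr.sh, cut
down to the single thread count as I mentioned, and still prints the
threads=/rate= lines; the filename and run count are placeholders.)

  for run in $(seq 1 60); do
      ./netperf-rr.sh
  done | awk '/^threads=/ { sub(/rate=/, "", $2); sum += $2; n++ }
              END { if (n) printf "runs=%d mean_rate=%.0f\n", n, sum / n }'
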
> > Anyway, tried to get an idea of performance on my 8 core NUMA system,
> > over localhost, and just at 64 threads. Ran the test 60 times for
> > each allocator.
> > 
> > Rates for 2.6.31-rc2 (+slqb from Pekka's tree)
> > SLAB: 1869710
> > SLQB: 1859710
> > SLUB: 1769400
> > 
> 
> Great, slqb doesn't regress nearly as much as slub did.

No, although let's see if I can get some remote numbers.

 
> These statistics do show that pulling slab out in favor of slub
> prematurely is probably inadvisable, however, when the performance
> achieved with slab in this benchmark is far beyond slub's upper bound.

Well, given these numbers and others, I agree. You're not the only
one unhappy with slub performance (and I wouldn't have wasted my
time on slqb if I were happy with it). I think we can give SLUB
some more time, though, if Christoph has some improvements.

 
> > Now I didn't reboot or restart the netperf server during runs, so
> > there is a possibility of results drifting for some reason (e.g. due
> > to cache/node placement).
> > 
> 
> SLUB should perform slightly better after the first run on a NUMA system 
> since its partial lists (for kmalloc-256 and kmalloc-2048) should be 
> populated with free slabs, which avoid costly page allocations, because of 
> min_partial.

Yeah, although it's probably within the noise really. But I can
turn it into a pseudo-UMA system as well, so I will try with that
too.
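
(Side note for when I rerun this: the partial-list effect should be
visible directly in SLUB's sysfs files, and one way to fake UMA on
this box is simply booting with NUMA disabled. Rough sketch only; the
cache names and boot parameter are assumptions about my setup:)

  # partial slabs currently held by the caches David mentioned, plus
  # the min_partial floor that keeps them around:
  cat /sys/kernel/slab/kmalloc-256/partial
  cat /sys/kernel/slab/kmalloc-2048/partial
  cat /sys/kernel/slab/kmalloc-256/min_partial

  # pseudo-UMA run: boot the NUMA box with "numa=off" on the kernel
  # command line so all memory is treated as a single node.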

Thanks,
Nick

