From: Eric Dumazet <eric.dumazet@gmail.com>
To: Christoph Lameter <cl@linux.com>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
"Alex,Shi" <alex.shi@intel.com>,
Shaohua Li <shaohua.li@intel.com>, Matt Mackall <mpm@selenic.com>
Subject: Re: [PATCH] slub: prefetch next freelist pointer in slab_alloc()
Date: Fri, 16 Dec 2011 18:18:35 +0100
Message-ID: <1324055915.25554.69.camel@edumazet-HP-Compaq-6005-Pro-SFF-PC>
In-Reply-To: <alpine.DEB.2.00.1112161030270.26651@router.home>
On Friday, 16 December 2011 at 10:31 -0600, Christoph Lameter wrote:
> On Fri, 16 Dec 2011, Eric Dumazet wrote:
>
> > Recycling a page is a problem, since freelist link chain is hot on
> > cpu(s) which freed objects, and possibly very cold on cpu currently
> > owning slab.
>
> Good idea. How do the tcp benchmarks look after this?
>
> Looks sane.
>
> Acked-by: Christoph Lameter <cl@linux.com>
Thanks!
I wouldn't expect a huge win for TCP (most of the CPU time is spent in
the TCP stack itself, not in memory allocations), but still...
[I expect a much better gain on a UDP load, where memory allocator
costs are relatively higher.]
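[ For reference, here is a minimal userspace sketch of the technique.
This is not the slub.c code itself; the struct layout and names below
are made up for illustration. Each free object embeds a pointer to the
next free object, and the alloc fast path prefetches that next link so
the following allocation does not take a cold miss on a chain written
by another cpu: ]

/*
 * Toy LIFO freelist: the link lives inside the free object itself,
 * which is exactly the property that makes the chain cache-cold for
 * the allocating cpu when another cpu freed the objects.
 */
#include <stdio.h>

struct object {
	struct object *free_next;	/* freelist link, stored in the object */
	char payload[56];
};

static struct object *freelist;

static void *obj_alloc(void)
{
	struct object *obj = freelist;

	if (!obj)
		return NULL;
	freelist = obj->free_next;
	/*
	 * The point of the patch: touch the *next* object's link now,
	 * while there is slack, instead of stalling on it at the start
	 * of the next allocation.
	 */
	if (freelist)
		__builtin_prefetch(&freelist->free_next);
	return obj;
}

static void obj_free(void *p)
{
	struct object *obj = p;

	obj->free_next = freelist;
	freelist = obj;
}

int main(void)
{
	struct object pool[4];
	int i;

	for (i = 0; i < 4; i++)
		obj_free(&pool[i]);
	for (i = 0; i < 4; i++)
		printf("alloc -> %p\n", obj_alloc());
	return 0;
}

( __builtin_prefetch() is a gcc/clang builtin; it compiles to a plain
prefetch hint where the target has one, and to nothing otherwise. )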
$ cat netperf.sh
# Start 32 concurrent TCP_RR (request/response) netperf instances.
# -v 0 minimizes output; a negative -l value makes each instance run
# 100000 transactions instead of running for a fixed time.
for i in `seq 1 32`
do
	netperf -H 192.168.20.110 -v 0 -l -100000 -t TCP_RR &
done
wait
If cpu0 handles network interrupts and the other cpus run the applications:
Before :

 Performance counter stats for './netperf.sh':

      38001,927957 task-clock                #    2,344 CPUs utilized
         3 306 138 context-switches          #    0,087 M/sec
                79 CPU-migrations            #    0,000 M/sec
             9 656 page-faults               #    0,000 M/sec
    83 564 329 446 cycles                    #    2,199 GHz
    61 350 744 867 stalled-cycles-frontend   #   73,42% frontend cycles idle
    34 907 541 687 stalled-cycles-backend    #   41,77% backend cycles idle
    44 739 971 752 instructions              #    0,54  insns per cycle
                                             #    1,37  stalled cycles per insn
     8 662 005 669 branches                  #  227,936 M/sec
       249 555 153 branch-misses             #    2,88% of all branches

      16,214220448 seconds time elapsed
After :

 Performance counter stats for './netperf.sh':

      37035,347847 task-clock                #    2,374 CPUs utilized
         3 314 540 context-switches          #    0,089 M/sec
               131 CPU-migrations            #    0,000 M/sec
             9 691 page-faults               #    0,000 M/sec
    81 783 678 294 cycles                    #    2,208 GHz
    59 595 242 695 stalled-cycles-frontend   #   72,87% frontend cycles idle
    34 367 813 304 stalled-cycles-backend    #   42,02% backend cycles idle
    44 698 853 546 instructions              #    0,55  insns per cycle
                                             #    1,33  stalled cycles per insn
     8 654 940 308 branches                  #  233,694 M/sec
       245 578 562 branch-misses             #    2,84% of all branches

      15,597940419 seconds time elapsed
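[ That is roughly 3.8% less elapsed time (16.21 s -> 15.60 s) and about
2% fewer cycles for the same number of transactions, with frontend
stalls slightly down as well. ]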