linuxppc-dev.lists.ozlabs.org archive mirror
From: Anton Blanchard <anton@samba.org>
To: mingo@elte.hu, peterz@infradead.org, benh@kernel.crashing.org
Cc: fenghua.yu@intel.com, tony.luck@intel.com,
	linux-kernel@vger.kernel.org, ralf@linux-mips.org,
	lethal@linux-sh.org, cmetcalf@tilera.com,
	linuxppc-dev@lists.ozlabs.org, davem@davemloft.net
Subject: [PATCH 1/5] powerpc/numa: Enable SD_WAKE_AFFINE in node definition
Date: Mon, 25 Jul 2011 12:33:12 +1000	[thread overview]
Message-ID: <20110725023426.037449739@samba.org> (raw)
In-Reply-To: <20110725023311.175792493@samba.org>

When chasing a performance issue on ppc64, I noticed tasks
communicating via a pipe would often end up on different nodes.

It turns out SD_WAKE_AFFINE is not set in our node definition. Commit
9fcd18c9e63e (sched: re-tune balancing) enabled SD_WAKE_AFFINE
in the node definition for x86 and we need a similar change for
ppc64.

I used lmbench lat_ctx and perf bench pipe to verify this fix. Each
benchmark was run 10 times and the average taken.


lmbench lat_ctx:

before:  66565 ops/sec
after:  204700 ops/sec

3.1x faster


perf bench pipe:

before: 5.6570 usecs
after:  1.3470 usecs

4.2x faster


Signed-off-by: Anton Blanchard <anton@samba.org>
---

Cc-ing arch maintainers who might need to look at their SD_NODE_INIT
definitions

Index: linux-2.6-work/arch/powerpc/include/asm/topology.h
===================================================================
--- linux-2.6-work.orig/arch/powerpc/include/asm/topology.h	2011-07-18 16:24:55.639949552 +1000
+++ linux-2.6-work/arch/powerpc/include/asm/topology.h	2011-07-18 16:25:02.630074557 +1000
@@ -73,7 +73,7 @@ static inline int pcibus_to_node(struct
 				| 1*SD_BALANCE_EXEC			\
 				| 1*SD_BALANCE_FORK			\
 				| 0*SD_BALANCE_WAKE			\
-				| 0*SD_WAKE_AFFINE			\
+				| 1*SD_WAKE_AFFINE			\
 				| 0*SD_PREFER_LOCAL			\
 				| 0*SD_SHARE_CPUPOWER			\
 				| 0*SD_POWERSAVINGS_BALANCE		\


Thread overview: 10+ messages
2011-07-25  2:33 [PATCH 0/5] ppc64 scheduler fixes Anton Blanchard
2011-07-25  2:33 ` Anton Blanchard [this message]
2011-07-25  2:33 ` [PATCH 2/5] sched: Allow SD_NODES_PER_DOMAIN to be overridden Anton Blanchard
2011-07-25  2:33 ` [PATCH 3/5] powerpc/numa: Increase SD_NODES_PER_DOMAIN to 32 Anton Blanchard
2011-07-25  2:33 ` [PATCH 4/5] powerpc/numa: Disable NEWIDLE balancing at node level Anton Blanchard
2011-07-25  2:33 ` [PATCH 5/5] powerpc/numa: Remove duplicate RECLAIM_DISTANCE definition Anton Blanchard
2011-07-25 12:41 ` [PATCH 0/5] ppc64 scheduler fixes Peter Zijlstra
2011-09-20  0:19   ` Anton Blanchard
2011-09-20  1:38     ` Benjamin Herrenschmidt
2011-09-20  8:07     ` Peter Zijlstra
