Date: Mon, 16 May 2016 16:00:32 +0200
From: Peter Zijlstra
To: Michael Neuling
Cc: Matt Fleming, mingo@kernel.org, linux-kernel@vger.kernel.org, clm@fb.com,
	mgalbraith@suse.de, tglx@linutronix.de, fweisbec@gmail.com,
	srikar@linux.vnet.ibm.com, anton@samba.org, oliver,
	"Shreyas B. Prabhu"
Subject: Re: [RFC][PATCH 4/7] sched: Replace sd_busy/nr_busy_cpus with sched_domain_shared
Message-ID: <20160516140032.GQ3193@twins.programming.kicks-ass.net>
In-Reply-To: <1463098346.25753.15.camel@neuling.org>

On Fri, May 13, 2016 at 10:12:26AM +1000, Michael Neuling wrote:
> > Basically; and if so, if it's cheap enough to shoot a task to an idle
> > core to avoid queueing. Assuming there is still some cache residency on
> > the old core, the inter-core fill should be much cheaper than fetching
> > it off package (either remote cache or DRAM).
> 
> So I think that will apply on POWER8.
> 
> Section 10.4.2 says "The L3.1 ECO Caches will be snooped and provide
> intervention data similar to the L2 and L3.0 caches on the chip".
> That should be much faster than going to another chip or DIMM.
> 
> So migrating to another core on the same chip should be faster than
> going off chip.

OK; so something like the below might be what you want to play with.

---
 arch/powerpc/kernel/smp.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 55c924b65f71..1a54fa8a3323 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -782,6 +782,23 @@ static struct sched_domain_topology_level powerpc_topology[] = {
 	{ NULL, },
 };
 
+static struct sched_domain_topology_level powerpc8_topology[] = {
+#ifdef CONFIG_SCHED_SMT
+	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
+#endif
+#ifdef CONFIG_SCHED_MC
+	/*
+	 * Model the L3.1 cache and set the LLC to span the whole package.
+	 *
+	 * This also ensures we try to move woken tasks to idle cores inside
+	 * the package to avoid queueing.
+	 */
+	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
+#endif
+	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
+	{ NULL, },
+};
+
 void __init smp_cpus_done(unsigned int max_cpus)
 {
 	cpumask_var_t old_mask;
@@ -806,7 +823,10 @@ void __init smp_cpus_done(unsigned int max_cpus)
 
 	dump_numa_cpu_topology();
 
-	set_sched_topology(powerpc_topology);
+	if (cpu_has_feature(CPU_FTRS_POWER8))
+		set_sched_topology(powerpc8_topology);
+	else
+		set_sched_topology(powerpc_topology);
 }
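
FWIW, the reason the MC entry above ends up as the LLC: cpu_core_flags()
tags that level with SD_SHARE_PKG_RESOURCES, and the scheduler takes the
highest domain still carrying that flag as sd_llc. Roughly (simplified
from highest_flag_domain() in kernel/sched/sched.h; the comments tying
it to powerpc8_topology are added here for illustration):

/*
 * Simplified from highest_flag_domain() in kernel/sched/sched.h.
 * update_top_cache_domain() calls it with SD_SHARE_PKG_RESOURCES to
 * establish sd_llc; with powerpc8_topology the walk is SMT -> MC and
 * stops at DIE, so the LLC span becomes the whole package.
 */
static inline struct sched_domain *highest_flag_domain(int cpu, int flag)
{
	struct sched_domain *sd, *hsd = NULL;

	for_each_domain(cpu, sd) {
		if (!(sd->flags & flag))	/* DIE lacks the flag */
			break;
		hsd = sd;			/* remember last match: MC */
	}

	return hsd;
}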
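
And the wakeup side that makes the L3.1 intervention matter; this is a
minimal sketch only -- the real logic is select_idle_sibling() in
kernel/sched/fair.c, and llc_mask()/cpu_is_idle() below are made-up
helpers for illustration, not kernel API:

/*
 * Hypothetical sketch of wakeup placement with the package as LLC.
 * llc_mask() and cpu_is_idle() are illustration-only helpers; see
 * select_idle_sibling() in kernel/sched/fair.c for the real thing.
 */
static int pick_wakeup_cpu(int prev_cpu)
{
	int cpu;

	/* Old CPU idle? Use it; our data may still sit in its L2/L3. */
	if (cpu_is_idle(prev_cpu))
		return prev_cpu;

	/*
	 * Scan the LLC span -- the whole package with the patch above.
	 * An inter-core fill via L3.1 intervention should beat both
	 * queueing behind another task and an off-chip fetch from a
	 * remote cache or DRAM.
	 */
	for_each_cpu(cpu, llc_mask(prev_cpu))
		if (cpu_is_idle(cpu))
			return cpu;

	/* Nothing idle on the package; queue on the old CPU. */
	return prev_cpu;
}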