From: Peter Zijlstra
To: Michael Neuling
Cc: Matt Fleming, mingo@kernel.org, linux-kernel@vger.kernel.org, clm@fb.com,
	mgalbraith@suse.de, tglx@linutronix.de, fweisbec@gmail.com,
	srikar@linux.vnet.ibm.com, anton@samba.org, oliver, "Shreyas B. Prabhu"
Subject: Re: [RFC][PATCH 4/7] sched: Replace sd_busy/nr_busy_cpus with sched_domain_shared
Date: Thu, 12 May 2016 07:07:50 +0200
Message-ID: <20160512050750.GK3192@twins.programming.kicks-ass.net>
References: <20160509104807.284575300@infradead.org>
 <20160509105210.642395937@infradead.org>
 <20160511115555.GT2839@codeblueprint.co.uk>
 <20160511123345.GD3192@twins.programming.kicks-ass.net>
 <20160511182402.GD3205@twins.programming.kicks-ass.net>
 <1463018737.28449.38.camel@neuling.org>
In-Reply-To: <1463018737.28449.38.camel@neuling.org>

On Thu, May 12, 2016 at 12:05:37PM +1000, Michael Neuling wrote:
> On Wed, 2016-05-11 at 20:24 +0200, Peter Zijlstra wrote:
> > On Wed, May 11, 2016 at 02:33:45PM +0200, Peter Zijlstra wrote:
> > >
> > > Hmm, PPC folks; what does your topology look like?
> > >
> > > Currently your sched_domain_topology, as per arch/powerpc/kernel/smp.c,
> > > seems to suggest your cores do not share cache at all.
> > >
> > > https://en.wikipedia.org/wiki/POWER7 seems to agree and states
> > >
> > >   "4 MB L3 cache per C1 core"
> > >
> > > And http://www-03.ibm.com/systems/resources/systems_power_software_i_perfmgmt_underthehood.pdf
> > > also explicitly draws pictures with the L3 per core.
> > >
> > > _however_, that same document describes L3 inter-core fill and lateral
> > > cast-out, which sounds like the L3s work together to form a node-wide
> > > caching system.
> > >
> > > Do we want to model this co-operative L3 slices thing as a sort of
> > > node-wide LLC for the purpose of the scheduler?
> >
> > Going back a generation; Power6 seems to have a shared L3 (off package)
> > between the two cores on the package. The current topology does not
> > reflect that at all.
> >
> > And going forward a generation; Power8 seems to share the per-core
> > (chiplet) L3 amongst all cores (chiplets), plus it has the centaur
> > (memory controller) 16M L4.
>
> Yep, L1/L2/L3 is per core on POWER8 and POWER7.  POWER6 and POWER5 (both
> dual core chips) had a shared off-chip cache.

But as per the above, Power7 and Power8 have explicit logic to share the
per-core L3 with the other cores.

How effective is that? From some of the slides/documents I've looked at,
the L3s are connected with a high-speed fabric, suggesting that the
cross-core sharing should be fairly efficient.

In which case it would make sense to treat/model the combined L3 as a
single large LLC covering all cores.
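
To make that concrete, a rough sketch of what such a topology could look
like in arch/powerpc/kernel/smp.c is below. The SMT and DIE levels,
powerpc_smt_flags and SD_SHARE_PKG_RESOURCES exist today; cpu_chip_mask and
powerpc_shared_cache_flags are made-up names for a mask/flags pair covering
all cores whose L3 slices co-operate, so treat this as a sketch rather than
a patch:

static int powerpc_shared_cache_flags(void)
{
	/* Mark this level as cache-sharing; sd_llc would then span it. */
	return SD_SHARE_PKG_RESOURCES;
}

static struct sched_domain_topology_level powerpc_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
#endif
	/*
	 * Hypothetical level: all cores whose L3 slices co-operate
	 * (cpu_chip_mask is not an existing symbol) form one LLC domain.
	 */
	{ cpu_chip_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};

With something like that, sd_llc would cover the whole chip, which is what
the sched_domain_shared/nr_busy_cpus accounting in this series hangs off of.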