Date: Thu, 12 May 2016 13:33:59 +0200
From: Peter Zijlstra
To: Michael Neuling
Cc: Matt Fleming, mingo@kernel.org, linux-kernel@vger.kernel.org, clm@fb.com,
    mgalbraith@suse.de, tglx@linutronix.de, fweisbec@gmail.com,
    srikar@linux.vnet.ibm.com, anton@samba.org, oliver, "Shreyas B. Prabhu"
Subject: Re: [RFC][PATCH 4/7] sched: Replace sd_busy/nr_busy_cpus with sched_domain_shared
Message-ID: <20160512113359.GO3192@twins.programming.kicks-ass.net>
References: <20160509104807.284575300@infradead.org>
 <20160509105210.642395937@infradead.org>
 <20160511115555.GT2839@codeblueprint.co.uk>
 <20160511123345.GD3192@twins.programming.kicks-ass.net>
 <20160511182402.GD3205@twins.programming.kicks-ass.net>
 <1463018737.28449.38.camel@neuling.org>
 <20160512050750.GK3192@twins.programming.kicks-ass.net>
 <1463051272.28449.59.camel@neuling.org>
In-Reply-To: <1463051272.28449.59.camel@neuling.org>

On Thu, May 12, 2016 at 09:07:52PM +1000, Michael Neuling wrote:
> On Thu, 2016-05-12 at 07:07 +0200, Peter Zijlstra wrote:
> > But as per the above, Power7 and Power8 have explicit logic to share the
> > per-core L3 with the other cores.
> >
> > How effective is that? From some of the slides/documents I've looked at,
> > the L3s are connected with a high-speed fabric, suggesting that the
> > cross-core sharing should be fairly efficient.
>
> I'm not sure.
> I thought it was mostly private, but if another core was sleeping or not
> experiencing much cache pressure, another core could use it for some
> things. But I'm fuzzy on the exact properties, sorry.

Right; I'm going by bits and pieces found on the tubes, so I'm just
guessing ;-)

But it sounds like these L3s are nowhere close to what Intel does with
their L3, where each core has an L3 slice, and the slices are connected
on a ring to form a unified/shared cache across all cores.

  http://www.realworldtech.com/sandy-bridge/8/

> > In which case it would make sense to treat/model the combined L3 as a
> > single large LLC covering all cores.
>
> Are you thinking it would be much cheaper to migrate a task to another
> core inside this chip, than to off chip?

Basically; and if so, whether it's cheap enough to shoot a task to an
idle core to avoid queueing.

Assuming there is still some cache residency on the old core, the
inter-core fill should be much cheaper than fetching it off package
(either from a remote cache or from DRAM).

Or at least, so goes my reasoning based on my google results.