Date: Thu, 13 Apr 2017 11:42:32 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, jiangshanlai@gmail.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
	tglx@linutronix.de, rostedt@goodmis.org, dhowells@redhat.com,
	edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com,
	bobby.prani@gmail.com
Subject: Re: [PATCH tip/core/rcu 04/13] rcu: Make RCU_FANOUT_LEAF help text more explicit about skew_tick
Message-Id: <20170413184232.GQ3956@linux.vnet.ibm.com>
In-Reply-To: <20170413182309.vmyivo3oqrtfhhxt@hirez.programming.kicks-ass.net>

On Thu, Apr 13, 2017 at 08:23:09PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 13, 2017 at 11:19:26AM -0700, Paul E. McKenney wrote:
> 
> > First get me some system-level data showing that the current layout is
> > causing a real problem. RCU's fastpath code doesn't come anywhere near
> > the rcu_node tree, so in the absence of such data, I of course remain
> > quite doubtful that there is a real need. And painfully aware of the
> > required increase in complexity.
> > 
> > But if there is a real need demonstrated by real system-level data,
> > I will of course make the needed changes, as I have done many times in
> > the past in response to other requests.
> 
> I read what you wrote here:
> 
> > > > Increasing it reduces the number of rcu_node structures, and thus the
> > > > number of cache misses during grace-period initialization and cleanup.
> > > > This has proven necessary in the past on large machines having long
> > > > memory latencies. And there are starting to be some pretty big machines
> > > > running in production, and even for typical commercial workloads.
> 
> to mean you had exactly that pain. Or am I now totally not understanding
> you?

I believe that you are missing the fact that RCU grace-period
initialization and cleanup walk through the rcu_node tree breadth first,
using rcu_for_each_node_breadth_first(). This macro (shown below)
implements the breadth-first walk as a simple sequential traversal of
the ->node[] array that provides the structures making up the rcu_node
tree. As you can see, this scan is completely independent of how CPU
numbers might be mapped to rcu_data slots in the leaf rcu_node
structures.

							Thanx, Paul

/*
 * Do a full breadth-first scan of the rcu_node structures for the
 * specified rcu_state structure.
 */
#define rcu_for_each_node_breadth_first(rsp, rnp) \
	for ((rnp) = &(rsp)->node[0]; \
	     (rnp) < &(rsp)->node[rcu_num_nodes]; (rnp)++)
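
For illustration, here is a minimal stand-alone sketch of the same idea
outside the kernel: the nodes of a small tree stored in a single array,
filled level by level, so that a plain sequential scan of the array visits
them in breadth-first order no matter which CPUs the leaves cover. The
names (struct demo_node, DEMO_NUM_NODES, demo_for_each_node_breadth_first())
are made up for this example and are not kernel identifiers.

/*
 * Minimal stand-alone sketch (not kernel code): tree nodes stored in a
 * single array, filled level by level, so that a sequential scan of the
 * array is a breadth-first traversal.
 */
#include <stdio.h>

#define DEMO_NUM_NODES 5	/* one root plus four leaves */

struct demo_node {
	int level;		/* depth in the tree, root == 0 */
	int grplo;		/* lowest CPU number covered by this node */
	int grphi;		/* highest CPU number covered by this node */
};

/* Level-order layout: node[0] is the root, node[1..4] are the leaves. */
static struct demo_node node[DEMO_NUM_NODES] = {
	{ .level = 0, .grplo =  0, .grphi = 15 },
	{ .level = 1, .grplo =  0, .grphi =  3 },
	{ .level = 1, .grplo =  4, .grphi =  7 },
	{ .level = 1, .grplo =  8, .grphi = 11 },
	{ .level = 1, .grplo = 12, .grphi = 15 },
};

/* Same shape as rcu_for_each_node_breadth_first(): just walk the array. */
#define demo_for_each_node_breadth_first(rnp) \
	for ((rnp) = &node[0]; (rnp) < &node[DEMO_NUM_NODES]; (rnp)++)

int main(void)
{
	struct demo_node *rnp;

	/*
	 * Each node is visited exactly once, in array (and therefore
	 * breadth-first) order, regardless of how CPUs map to leaves.
	 */
	demo_for_each_node_breadth_first(rnp)
		printf("level %d: CPUs %d-%d\n",
		       rnp->level, rnp->grplo, rnp->grphi);
	return 0;
}

Running this prints the root first and then the four leaves, mirroring the
order in which grace-period initialization and cleanup touch the rcu_node
structures in the quoted macro above.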