Date: Wed, 19 Apr 2017 17:40:40 +0200
From: Peter Zijlstra
To: "Paul E. McKenney"
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, jiangshanlai@gmail.com, dipankar@in.ibm.com, akpm@linux-foundation.org, mathieu.desnoyers@efficios.com, josh@joshtriplett.org, tglx@linutronix.de, rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com, bobby.prani@gmail.com
Subject: Re: [PATCH tip/core/rcu 04/13] rcu: Make RCU_FANOUT_LEAF help text more explicit about skew_tick
Message-ID: <20170419154040.knkdg2j6awrp74ua@hirez.programming.kicks-ass.net>
References: <20170413165516.GI3956@linux.vnet.ibm.com> <20170413170434.xk4zq3p75pu3ubxw@hirez.programming.kicks-ass.net> <20170413173100.GL3956@linux.vnet.ibm.com> <20170413174631.56ycg545gwbsb4q2@hirez.programming.kicks-ass.net> <20170413181926.GP3956@linux.vnet.ibm.com> <20170413182309.vmyivo3oqrtfhhxt@hirez.programming.kicks-ass.net> <20170413184232.GQ3956@linux.vnet.ibm.com> <20170419132226.yvo3jyweb3d2a632@hirez.programming.kicks-ass.net> <20170419134835.bpuhurle2jjr66hm@hirez.programming.kicks-ass.net> <20170419150809.GL3956@linux.vnet.ibm.com>
In-Reply-To: <20170419150809.GL3956@linux.vnet.ibm.com>

On Wed, Apr 19, 2017 at 08:08:09AM -0700, Paul E. McKenney wrote:
> And even that would not be completely sufficient.  After all, the state
> in the leaf rcu_node structure will be out of date during grace-period
> initialization and cleanup.  So to -completely- synchronize state for
> the incoming CPU, I would have to acquire the root rcu_node structure's
> lock and look at the live state.  Needless to say, the performance and
> scalability implications of acquiring a global lock on each and every
> idle exit event are not going to be at all pretty.

Arguably you could use a seqlock to read the global state, so idle exit
never has to take the root rcu_node lock at all.

I'll still ponder things a bit more, especially those bugs you pointed
me at that come from just reading gpnum.
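For concreteness, a minimal sketch of the seqlock idea; the helper names
here are made up, and it assumes the state of interest is rsp->gpnum and
rsp->completed with struct rcu_state visible as in kernel/rcu/tree.h:

#include <linux/seqlock.h>
#include <linux/compiler.h>

/* Hypothetical seqlock guarding the global grace-period numbers. */
static DEFINE_SEQLOCK(rcu_gp_state_lock);

/*
 * Writer side -- grace-period init/cleanup is already serialized on
 * the root rcu_node lock; it additionally publishes the new numbers
 * under the seqlock so lockless readers can detect a racing update.
 */
static void rcu_gp_publish(struct rcu_state *rsp,
			   unsigned long gpnum, unsigned long completed)
{
	write_seqlock(&rcu_gp_state_lock);
	WRITE_ONCE(rsp->gpnum, gpnum);
	WRITE_ONCE(rsp->completed, completed);
	write_sequnlock(&rcu_gp_state_lock);
}

/*
 * Reader side -- idle exit takes no lock; it simply retries the
 * snapshot if a grace-period transition raced with it, so it always
 * sees a consistent (gpnum, completed) pair.
 */
static void rcu_gp_snapshot(struct rcu_state *rsp,
			    unsigned long *gpnum, unsigned long *completed)
{
	unsigned int seq;

	do {
		seq = read_seqbegin(&rcu_gp_state_lock);
		*gpnum = READ_ONCE(rsp->gpnum);
		*completed = READ_ONCE(rsp->completed);
	} while (read_seqretry(&rcu_gp_state_lock, seq));
}

The point being that the reader side only ever loads the seqcount and
the two words, so idle exit dirties nothing global; only the GP kthread
writes that cacheline.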