Date: Fri, 31 Aug 2012 12:07:33 -0700
From: Josh Triplett
To: "Paul E. McKenney"
Cc: linux-kernel@vger.kernel.org, mingo@elte.hu, laijs@cn.fujitsu.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
	niv@us.ibm.com, tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org,
	Valdis.Kletnieks@vt.edu, dhowells@redhat.com, eric.dumazet@gmail.com,
	darren@dvhart.com, fweisbec@gmail.com, sbw@mit.edu, patches@linaro.org,
	Alessio Igor Bogani, Avi Kivity, Chris Metcalf, Christoph Lameter,
	Daniel Lezcano, Geoff Levand, Gilad Ben Yossef, Hakan Akkan, Ingo Molnar,
	Kevin Hilman, Max Krasnyansky, Stephen Hemminger, Sven-Thorsten Dietrich
Subject: Re: [PATCH tip/core/rcu 01/26] rcu: New rcu_user_enter() and rcu_user_exit() APIs
Message-ID: <20120831190733.GP4259@jtriplet-mobl1>
In-Reply-To: <1346360743-3628-1-git-send-email-paulmck@linux.vnet.ibm.com>

On Thu, Aug 30, 2012 at 02:05:18PM -0700, Paul E.
McKenney wrote:
> From: Frederic Weisbecker
> 
> RCU currently insists that only idle tasks can enter RCU idle mode, which
> prohibits an adaptive tickless kernel (AKA nohz cpusets), which in turn
> would mean that usermode execution would always take scheduling-clock
> interrupts, even when there is only one task runnable on the CPU in
> question.
> 
> This commit therefore adds rcu_user_enter() and rcu_user_exit(), which
> allow non-idle tasks to enter RCU idle mode.  These are quite similar
> to rcu_idle_enter() and rcu_idle_exit(), respectively, except that they
> omit the idle-task checks.
> 
> [ Updated to use "user" flag rather than separate check functions. ]
> 
> Signed-off-by: Frederic Weisbecker
> Signed-off-by: Paul E. McKenney
> Cc: Alessio Igor Bogani
> Cc: Andrew Morton
> Cc: Avi Kivity
> Cc: Chris Metcalf
> Cc: Christoph Lameter
> Cc: Daniel Lezcano
> Cc: Geoff Levand
> Cc: Gilad Ben Yossef
> Cc: Hakan Akkan
> Cc: Ingo Molnar
> Cc: Kevin Hilman
> Cc: Max Krasnyansky
> Cc: Peter Zijlstra
> Cc: Stephen Hemminger
> Cc: Steven Rostedt
> Cc: Sven-Thorsten Dietrich
> Cc: Thomas Gleixner

A few suggestions below: an optional microoptimization and some bugfixes.
With the bugfixes, and with or without the microoptimization:

Reviewed-by: Josh Triplett

> --- a/kernel/rcutree.c
> +++ b/kernel/rcutree.c
[...]
> -static void rcu_idle_enter_common(struct rcu_dynticks *rdtp, long long oldval)
> +static void rcu_eqs_enter_common(struct rcu_dynticks *rdtp, long long oldval,
> +				 bool user)
>  {
>  	trace_rcu_dyntick("Start", oldval, 0);
> -	if (!is_idle_task(current)) {
> +	if (!is_idle_task(current) && !user) {

Microoptimization: putting the !user check first (here and in the exit
function) would allow the compiler to partially inline rcu_eqs_*_common
into the two trivial wrappers and constant-fold away the test for !user.
> +void rcu_idle_enter(void)
> +{
> +	rcu_eqs_enter(0);
> +}

s/0/false/

> +void rcu_user_enter(void)
> +{
> +	rcu_eqs_enter(1);
> +}

s/1/true/

> -static void rcu_idle_exit_common(struct rcu_dynticks *rdtp, long long oldval)
> +static void rcu_eqs_exit_common(struct rcu_dynticks *rdtp, long long oldval,
> +				int user)
>  {
>  	smp_mb__before_atomic_inc();  /* Force ordering w/previous sojourn. */
>  	atomic_inc(&rdtp->dynticks);
> @@ -464,7 +490,7 @@ static void rcu_idle_exit_common(struct rcu_dynticks *rdtp, long long oldval)
>  	WARN_ON_ONCE(!(atomic_read(&rdtp->dynticks) & 0x1));
>  	rcu_cleanup_after_idle(smp_processor_id());
>  	trace_rcu_dyntick("End", oldval, rdtp->dynticks_nesting);
> -	if (!is_idle_task(current)) {
> +	if (!is_idle_task(current) && !user) {

Same micro-optimization as the enter function.

> +void rcu_idle_exit(void)
> +{
> +	rcu_eqs_exit(0);
> +}

s/0/false/

> +void rcu_user_exit(void)
> +{
> +	rcu_eqs_exit(1);
> +}

s/1/true/

> @@ -539,7 +586,7 @@ void rcu_irq_enter(void)
>  	if (oldval)
>  		trace_rcu_dyntick("++=", oldval, rdtp->dynticks_nesting);
>  	else
> -		rcu_idle_exit_common(rdtp, oldval);
> +		rcu_eqs_exit_common(rdtp, oldval, 1);

s/1/true/, and likewise in rcu_irq_exit.

- Josh Triplett