From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1758039Ab2IEBWr (ORCPT );
	Tue, 4 Sep 2012 21:22:47 -0400
Received: from e35.co.us.ibm.com ([32.97.110.153]:57301 "EHLO e35.co.us.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752847Ab2IEBWp (ORCPT );
	Tue, 4 Sep 2012 21:22:45 -0400
Date: Tue, 4 Sep 2012 18:22:26 -0700
From: "Paul E. McKenney"
To: Josh Triplett
Cc: linux-kernel@vger.kernel.org, mingo@elte.hu, laijs@cn.fujitsu.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@polymtl.ca, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com,
	fweisbec@gmail.com, sbw@mit.edu, patches@linaro.org
Subject: Re: [PATCH tip/core/rcu 02/23] rcu: Allow RCU grace-period initialization to be preempted
Message-ID: <20120905012226.GD2593@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20120830181811.GA29154@linux.vnet.ibm.com>
 <1346350718-30937-1-git-send-email-paulmck@linux.vnet.ibm.com>
 <1346350718-30937-2-git-send-email-paulmck@linux.vnet.ibm.com>
 <20120902010935.GB5713@leaf>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20120902010935.GB5713@leaf>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12090501-6148-0000-0000-000009470548
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Sep 01, 2012 at 06:09:35PM -0700, Josh Triplett wrote:
> On Thu, Aug 30, 2012 at 11:18:17AM -0700, Paul E. McKenney wrote:
> > From: "Paul E. McKenney"
> >
> > RCU grace-period initialization is currently carried out with interrupts
> > disabled, which can result in 200-microsecond latency spikes on systems
> > on which RCU has been configured for 4096 CPUs.
> > This patch therefore makes the RCU grace-period initialization be
> > preemptible, which should eliminate those latency spikes.  Similar
> > spikes from grace-period cleanup and the forcing of quiescent states
> > will be dealt with similarly by later patches.
> >
> > Reported-by: Mike Galbraith
> > Reported-by: Dimitri Sivanich
> > Signed-off-by: Paul E. McKenney

> Does it make sense to have cond_resched() right before the continues,
> which lead right back up to the wait_event_interruptible at the top of
> the loop?  Or do you expect to usually find that event already
> signalled?

Given that the event might have already signaled, I need to put the
cond_resched() into the loop.  Otherwise, we get back long latencies.

							Thanx, Paul

> In any case:
>
> Reviewed-by: Josh Triplett
>
> >  kernel/rcutree.c |   17 ++++++++++-------
> >  1 files changed, 10 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > index e1c5868..ef56aa3 100644
> > --- a/kernel/rcutree.c
> > +++ b/kernel/rcutree.c
> > @@ -1069,6 +1069,7 @@ static int rcu_gp_kthread(void *arg)
> >  			 * don't start another one.
> >  			 */
> >  			raw_spin_unlock_irqrestore(&rnp->lock, flags);
> > +			cond_resched();
> >  			continue;
> >  		}
> >
> > @@ -1079,6 +1080,7 @@ static int rcu_gp_kthread(void *arg)
> >  			 */
> >  			rsp->fqs_need_gp = 1;
> >  			raw_spin_unlock_irqrestore(&rnp->lock, flags);
> > +			cond_resched();
> >  			continue;
> >  		}
> >
> > @@ -1089,10 +1091,10 @@ static int rcu_gp_kthread(void *arg)
> >  		rsp->fqs_state = RCU_GP_INIT; /* Stop force_quiescent_state. */
> >  		rsp->jiffies_force_qs = jiffies + RCU_JIFFIES_TILL_FORCE_QS;
> >  		record_gp_stall_check_time(rsp);
> > -		raw_spin_unlock(&rnp->lock); /* leave irqs disabled. */
> > +		raw_spin_unlock_irqrestore(&rnp->lock, flags);
> >
> >  		/* Exclude any concurrent CPU-hotplug operations. */
> > -		raw_spin_lock(&rsp->onofflock); /* irqs already disabled. */
> > +		get_online_cpus();
> >
> >  		/*
> >  		 * Set the quiescent-state-needed bits in all the rcu_node
> > @@ -1112,7 +1114,7 @@ static int rcu_gp_kthread(void *arg)
> >  		 * due to the fact that we have irqs disabled.
> >  		 */
> >  		rcu_for_each_node_breadth_first(rsp, rnp) {
> > -			raw_spin_lock(&rnp->lock); /* irqs already disabled. */
> > +			raw_spin_lock_irqsave(&rnp->lock, flags);
> >  			rcu_preempt_check_blocked_tasks(rnp);
> >  			rnp->qsmask = rnp->qsmaskinit;
> >  			rnp->gpnum = rsp->gpnum;
> > @@ -1123,15 +1125,16 @@ static int rcu_gp_kthread(void *arg)
> >  			trace_rcu_grace_period_init(rsp->name, rnp->gpnum,
> >  						    rnp->level, rnp->grplo,
> >  						    rnp->grphi, rnp->qsmask);
> > -			raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
> > +			raw_spin_unlock_irqrestore(&rnp->lock, flags);
> > +			cond_resched();
> >  		}
> >
> >  		rnp = rcu_get_root(rsp);
> > -		raw_spin_lock(&rnp->lock); /* irqs already disabled. */
> > +		raw_spin_lock_irqsave(&rnp->lock, flags);
> >  		/* force_quiescent_state() now OK. */
> >  		rsp->fqs_state = RCU_SIGNAL_INIT;
> > -		raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
> > -		raw_spin_unlock_irqrestore(&rsp->onofflock, flags);
> > +		raw_spin_unlock_irqrestore(&rnp->lock, flags);
> > +		put_online_cpus();
> >  	}
> >  	return 0;
> >  }
> > --
> > 1.7.8
> >
>