Date: Wed, 7 Oct 2015 09:26:53 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, jiangshanlai@gmail.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
	tglx@linutronix.de, rostedt@goodmis.org, dhowells@redhat.com,
	edumazet@google.com, dvhart@linux.intel.com, fweisbec@gmail.com,
	oleg@redhat.com, bobby.prani@gmail.com
Subject: Re: [PATCH tip/core/rcu 18/18] rcu: Better hotplug handling for
	synchronize_sched_expedited()
Message-ID: <20151007162653.GP3910@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20151006162907.GA12020@linux.vnet.ibm.com>
	<1444148977-14108-1-git-send-email-paulmck@linux.vnet.ibm.com>
	<1444148977-14108-18-git-send-email-paulmck@linux.vnet.ibm.com>
	<20151007142627.GE3604@twins.programming.kicks-ass.net>
In-Reply-To: <20151007142627.GE3604@twins.programming.kicks-ass.net>

On Wed, Oct 07, 2015 at 04:26:27PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 06, 2015 at 09:29:37AM -0700, Paul E. McKenney wrote:
> >  void rcu_sched_qs(void)
> >  {
> > +	unsigned long flags;
> > +
> >  	if (__this_cpu_read(rcu_sched_data.cpu_no_qs.s)) {
> >  		trace_rcu_grace_period(TPS("rcu_sched"),
> >  				       __this_cpu_read(rcu_sched_data.gpnum),
> >  				       TPS("cpuqs"));
> >  		__this_cpu_write(rcu_sched_data.cpu_no_qs.b.norm, false);
> > +		if (!__this_cpu_read(rcu_sched_data.cpu_no_qs.b.exp))
> > +			return;
> > +		local_irq_save(flags);
> >  		if (__this_cpu_read(rcu_sched_data.cpu_no_qs.b.exp)) {
> >  			__this_cpu_write(rcu_sched_data.cpu_no_qs.b.exp, false);
> >  			rcu_report_exp_rdp(&rcu_sched_state,
> >  					   this_cpu_ptr(&rcu_sched_data),
> >  					   true);
> >  		}
> > +		local_irq_restore(flags);
> >  	}
> >  }
>
> *sigh*.. still rare I suppose, but should we look at doing something
> like this?

Indeed, that approach looks better than moving rcu_note_context_switch(),
which would probably result in deadlocks.  I will update my patch
accordingly.
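
To spell out the hazard as I understand it: with rcu_note_context_switch()
moved to run under rq->lock, reporting the expedited quiescent state could
end up attempting a wakeup while that lock is already held, along the lines
of the following call chain (the wake_up() step and exact path are an
assumption on my part, and the self-deadlock needs the waiter's rq to be
this rq):

	__schedule()
	  raw_spin_lock_irq(&rq->lock)		/* rq->lock now held */
	  rcu_note_context_switch()
	    rcu_sched_qs()
	      rcu_report_exp_rdp()
	        wake_up()			/* wake expedited waiter */
	          try_to_wake_up()
	            raw_spin_lock(&rq->lock)	/* same rq: self-deadlock */

Your patch avoids this by disabling interrupts before calling
rcu_note_context_switch() but taking rq->lock only afterward, so a wakeup
issued while reporting the quiescent state can still acquire rq->lock
normally.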

							Thanx, Paul

> ---
>  kernel/sched/core.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index fe819298c220..3d830c3491c4 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3050,7 +3050,6 @@ static void __sched __schedule(void)
>
>  	cpu = smp_processor_id();
>  	rq = cpu_rq(cpu);
> -	rcu_note_context_switch();
>  	prev = rq->curr;
>
>  	schedule_debug(prev);
> @@ -3058,13 +3057,16 @@ static void __sched __schedule(void)
>  	if (sched_feat(HRTICK))
>  		hrtick_clear(rq);
>
> +	local_irq_disable();
> +	rcu_note_context_switch();
> +
>  	/*
>  	 * Make sure that signal_pending_state()->signal_pending() below
>  	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
>  	 * done by the caller to avoid the race with signal_wake_up().
>  	 */
>  	smp_mb__before_spinlock();
> -	raw_spin_lock_irq(&rq->lock);
> +	raw_spin_lock(&rq->lock);
>  	lockdep_pin_lock(&rq->lock);
>
>  	rq->clock_skip_update <<= 1; /* promote REQ to ACT */
>