Date: Fri, 4 Mar 2016 07:04:15 -0800
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Stephen Rothwell
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Peter Zijlstra,
	linux-next@vger.kernel.org, linux-kernel@vger.kernel.org,
	Boqun Feng
Subject: Re: linux-next: manual merge of the rcu tree with the tip tree
Message-ID: <20160304150415.GO3577@linux.vnet.ibm.com>
In-Reply-To: <20160304151306.05e3bb36@canb.auug.org.au>

On Fri, Mar 04, 2016 at 03:13:06PM +1100, Stephen Rothwell wrote:
> Hi Paul,
> 
> Today's linux-next merge of the rcu tree got a conflict in:
> 
>   kernel/rcu/tree.c
> 
> between commit:
> 
>   27d50c7eeb0f ("rcu: Make CPU_DYING_IDLE an explicit call")
> 
> from the tip tree and commit:
> 
>   67c583a7de34 ("RCU: Privatize rcu_node::lock")
> 
> from the rcu tree.
> 
> I fixed it up (see below) and can carry the fix as necessary (no action
> is required).

Thank you!  I have applied this resolution to -rcu and am testing it.
							Thanx, Paul

> -- 
> Cheers,
> Stephen Rothwell
> 
> diff --cc kernel/rcu/tree.c
> index 0bbc1497a0e4,55cea189783f..000000000000
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@@ -4227,43 -4246,6 +4224,43 @@@ static void rcu_prepare_cpu(int cpu
>   	rcu_init_percpu_data(cpu, rsp);
>   }
>   
>  +#ifdef CONFIG_HOTPLUG_CPU
>  +/*
>  + * The CPU is exiting the idle loop into the arch_cpu_idle_dead()
>  + * function.  We now remove it from the rcu_node tree's ->qsmaskinit
>  + * bit masks.
>  + */
>  +static void rcu_cleanup_dying_idle_cpu(int cpu, struct rcu_state *rsp)
>  +{
>  +	unsigned long flags;
>  +	unsigned long mask;
>  +	struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
>  +	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
>  +
>  +	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
>  +		return;
>  +
>  +	/* Remove outgoing CPU from mask in the leaf rcu_node structure. */
>  +	mask = rdp->grpmask;
>  +	raw_spin_lock_irqsave_rcu_node(rnp, flags); /* Enforce GP memory-order guarantee. */
>  +	rnp->qsmaskinitnext &= ~mask;
> - 	raw_spin_unlock_irqrestore(&rnp->lock, flags);
> ++	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
>  +}
>  +
>  +void rcu_report_dead(unsigned int cpu)
>  +{
>  +	struct rcu_state *rsp;
>  +
>  +	/* QS for any half-done expedited RCU-sched GP. */
>  +	preempt_disable();
>  +	rcu_report_exp_rdp(&rcu_sched_state,
>  +			   this_cpu_ptr(rcu_sched_state.rda), true);
>  +	preempt_enable();
>  +	for_each_rcu_flavor(rsp)
>  +		rcu_cleanup_dying_idle_cpu(cpu, rsp);
>  +}
>  +#endif
>  +
>   /*
>    * Handle CPU online/offline notification events.
>    */
> 