From: Peter Zijlstra <peterz@infradead.org>
To: "Paul E. McKenney" <paulmck@kernel.org>
Cc: tglx@linutronix.de, frederic@kernel.org,
	linux-kernel@vger.kernel.org, x86@kernel.org, cai@lca.pw,
	mgorman@techsingularity.net, joel@joelfernandes.org
Subject: Re: [RFC][PATCH 4/7] smp: Optimize send_call_function_single_ipi()
Date: Wed, 27 May 2020 18:35:43 +0200
Message-ID: <20200527163543.GA706478@hirez.programming.kicks-ass.net>
In-Reply-To: <20200527155656.GU2869@paulmck-ThinkPad-P72>

On Wed, May 27, 2020 at 08:56:56AM -0700, Paul E. McKenney wrote:
> On Wed, May 27, 2020 at 12:15:13PM +0200, Peter Zijlstra wrote:

> > At first glance, something like the below could work. But obviously I
> > might have overlooked something more subtle than a brick :-)
> 
> This can work, but only if the call from the idle loop is in a place
> where either RCU isn't watching on the one hand, or where we cannot be
> in an RCU read-side critical section on the other.

Guaranteed no RCU read side, although the call is in a place where RCU
is active again; is that a problem? I think that with a bit of work I
can move it to where RCU is still idle.

> Because rcu_exp_handler() assumes that if this function returns true,
> we are not in an RCU read-side critical section.  (I would expect this
> to be the case, but I figured that I should make it explicit.)

Indeed, I shall put a comment in the idle loop to make sure it stays that way.
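
Something along these lines at the call site, I suppose; the exact spot
in the idle loop is still up in the air per the above, and the call-site
function name below is only illustrative:

	/*
	 * No RCU read-side critical sections are allowed here:
	 * rcu_is_cpu_rrupt_from_idle() relies on callbacks run from the
	 * idle loop never being inside one.
	 */
	flush_smp_call_function_from_idle();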

> > ---
> > 
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 90c8be22d57a..0792c032a972 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -426,8 +426,11 @@ EXPORT_SYMBOL_GPL(rcu_momentary_dyntick_idle);
> >   */
> 
> Could we please have a comment noting the change in semantics and
> the reason?

A Changelog, you mean? Sure, I can do that, but I wasn't anywhere near
confident enough in the change to even bother trying to write one.

> >  static int rcu_is_cpu_rrupt_from_idle(void)
> >  {
> > -	/* Called only from within the scheduling-clock interrupt */
> > -	lockdep_assert_in_irq();
> > +	/*
> > +	 * Usually called from the tick; but also used from smp_call_function()
> > +	 * for expedited grace periods.
> > +	 */
> > +	lockdep_assert_irqs_disabled();
> >  
> >  	/* Check for counter underflows */
> >  	RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nesting) < 0,
> > @@ -435,8 +438,11 @@ static int rcu_is_cpu_rrupt_from_idle(void)
> >  	RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nmi_nesting) <= 0,
> >  			 "RCU dynticks_nmi_nesting counter underflow/zero!");
> >  
> > -	/* Are we at first interrupt nesting level? */
> > -	if (__this_cpu_read(rcu_data.dynticks_nmi_nesting) != 1)
> > +	/*
> > +	 * Are we at first interrupt nesting level? -- or below, when running
> > +	 * directly from the idle loop itself.
> > +	 */
> > +	if (__this_cpu_read(rcu_data.dynticks_nmi_nesting) > 1)
> 
> Wouldn't it also be a good idea to check that we are in the context of
> an idle thread?  Just in case some idiot like me drops a call to this
> function in the wrong place, for example, if I were to mistakenly remember
> the old semantics where it would return false from process context?
> 
> Maybe something like this?
> 
> 	nesting = __this_cpu_read(rcu_data.dynticks_nmi_nesting);
> 	if (nesting > 1)
> 		return false;
> 	WARN_ON_ONCE(!nesting && !is_idle_task(current));

Yep, that should do.

> >  		return false;
> >  
> >  	/* Does CPU appear to be idle from an RCU standpoint? */
> 
> And let's check the other callers:
> 
> rcu_sched_clock_irq():  This will always be called from IRQ (right?), so
> 	no problem.
> 
> rcu_pending():  Only called from rcu_sched_clock_irq(), so still no problem.
> 
> rcu_flavor_sched_clock_irq(): Ditto for both definitions.

Right, I went through them and didn't find anything obviously amiss. OK,
let me do a nicer patch.
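
Something like so, perhaps; a rough and entirely untested sketch of what
the nicer patch might end up looking like, with your is_idle_task()
check folded in:

	static int rcu_is_cpu_rrupt_from_idle(void)
	{
		long nesting;

		/*
		 * Usually called from the tick; but also used from
		 * smp_call_function() for expedited grace periods. The
		 * latter can only happen while the CPU is in the idle
		 * loop, outside of any RCU read-side critical section.
		 */
		lockdep_assert_irqs_disabled();

		/* Check for counter underflows */
		RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nesting) < 0,
				 "RCU dynticks_nesting counter underflow!");
		RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nmi_nesting) <= 0,
				 "RCU dynticks_nmi_nesting counter underflow/zero!");

		/* Are we at first interrupt nesting level? */
		nesting = __this_cpu_read(rcu_data.dynticks_nmi_nesting);
		if (nesting > 1)
			return false;

		/*
		 * If we're not in an interrupt, we must be running straight
		 * from the idle loop itself.
		 */
		WARN_ON_ONCE(!nesting && !is_idle_task(current));

		/* Does CPU appear to be idle from an RCU standpoint? */
		return __this_cpu_read(rcu_data.dynticks_nesting) == 0;
	}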


Thread overview: 62+ messages
2020-05-26 16:10 [RFC][PATCH 0/7] Fix the scheduler-IPI mess Peter Zijlstra
2020-05-26 16:10 ` [RFC][PATCH 1/7] sched: Fix smp_call_function_single_async() usage for ILB Peter Zijlstra
2020-05-26 23:56   ` Frederic Weisbecker
2020-05-27 10:23   ` Vincent Guittot
2020-05-27 11:28     ` Frederic Weisbecker
2020-05-27 12:07       ` Vincent Guittot
2020-05-29 15:26   ` Valentin Schneider
2020-06-01  9:52   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-06-01 11:40     ` Frederic Weisbecker
2020-05-26 16:10 ` [RFC][PATCH 2/7] smp: Optimize flush_smp_call_function_queue() Peter Zijlstra
2020-05-28 12:28   ` Frederic Weisbecker
2020-06-01  9:52   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-05-26 16:11 ` [RFC][PATCH 3/7] smp: Move irq_work_run() out of flush_smp_call_function_queue() Peter Zijlstra
2020-05-29 13:04   ` Frederic Weisbecker
2020-06-01  9:52   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-05-26 16:11 ` [RFC][PATCH 4/7] smp: Optimize send_call_function_single_ipi() Peter Zijlstra
2020-05-27  9:56   ` Peter Zijlstra
2020-05-27 10:15     ` Peter Zijlstra
2020-05-27 15:56       ` Paul E. McKenney
2020-05-27 16:35         ` Peter Zijlstra [this message]
2020-05-27 17:12           ` Peter Zijlstra
2020-05-27 19:39             ` Paul E. McKenney
2020-05-28  1:35               ` Joel Fernandes
2020-05-28  8:59             ` [tip: core/rcu] rcu: Allow for smp_call_function() running callbacks from idle tip-bot2 for Peter Zijlstra
2021-01-21 16:56             ` [RFC][PATCH 4/7] smp: Optimize send_call_function_single_ipi() Peter Zijlstra
2021-01-22  0:20               ` Paul E. McKenney
2021-01-22  8:31                 ` Peter Zijlstra
2021-01-22 15:35                   ` Paul E. McKenney
2020-05-29 13:01   ` Frederic Weisbecker
2020-06-01  9:52   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-05-26 16:11 ` [RFC][PATCH 5/7] irq_work, smp: Allow irq_work on call_single_queue Peter Zijlstra
2020-05-28 23:40   ` Frederic Weisbecker
2020-05-29 13:36     ` Peter Zijlstra
2020-06-05  9:37       ` Peter Zijlstra
2020-06-05 15:02         ` Frederic Weisbecker
2020-06-05 16:17           ` Peter Zijlstra
2020-06-05 15:24         ` Kees Cook
2020-06-10 13:24         ` Frederic Weisbecker
2020-06-01  9:52   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-05-26 16:11 ` [RFC][PATCH 6/7] sched: Add rq::ttwu_pending Peter Zijlstra
2020-06-01  9:52   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-05-26 16:11 ` [RFC][PATCH 7/7] sched: Replace rq::wake_list Peter Zijlstra
2020-05-29 15:10   ` Valdis Klētnieks
2020-06-01  9:52   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-06-02 15:16     ` Frederic Weisbecker
2020-06-04 14:18   ` [RFC][PATCH 7/7] " Guenter Roeck
2020-06-05  0:24     ` Eric Biggers
2020-06-05  7:41       ` Peter Zijlstra
2020-06-05 16:15         ` Eric Biggers
2020-06-06 23:13           ` Guenter Roeck
2020-06-09 20:21             ` Eric Biggers
2020-06-09 21:25               ` Guenter Roeck
2020-06-09 21:38                 ` Eric Biggers
2020-06-09 22:06                   ` Peter Zijlstra
2020-06-09 23:03                     ` Guenter Roeck
2020-06-10  9:09                       ` Peter Zijlstra
2020-06-18 17:57                 ` Steven Rostedt
2020-06-18 19:06                   ` Guenter Roeck
2020-06-09 22:07               ` Peter Zijlstra
2020-06-05  8:10     ` Peter Zijlstra
2020-06-05 13:33       ` Guenter Roeck
2020-06-05 14:09         ` Peter Zijlstra
