From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Manfred Spraul <manfred@colorfullife.com>
Cc: linux-kernel@vger.kernel.org, cl@linux-foundation.org,
	mingo@elte.hu, akpm@linux-foundation.org, dipankar@in.ibm.com,
	josht@linux.vnet.ibm.com, schamp@sgi.com, niv@us.ibm.com,
	dvhltc@us.ibm.com, ego@in.ibm.com, laijs@cn.fujitsu.com,
	rostedt@goodmis.org, peterz@infradead.org,
	penberg@cs.helsinki.fi, andi@firstfloor.org, tglx@linutronix.de
Subject: Re: [PATCH, RFC] v7 scalable classic RCU implementation
Date: Sun, 12 Oct 2008 15:46:29 -0700	[thread overview]
Message-ID: <20081012224629.GA7353@linux.vnet.ibm.com> (raw)
In-Reply-To: <48F21D58.3000404@colorfullife.com>

On Sun, Oct 12, 2008 at 05:52:56PM +0200, Manfred Spraul wrote:
> Paul E. McKenney wrote:
>> +/*
>> + * If the specified CPU is offline, tell the caller that it is in
>> + * a quiescent state.  Otherwise, whack it with a reschedule IPI.
>> + * Grace periods can end up waiting on an offline CPU when that
>> + * CPU is in the process of coming online -- it will be added to the
>> + * rcu_node bitmasks before it actually makes it online.  Because this
>> + * race is quite rare, we check for it after detecting that the grace
>> + * period has been delayed rather than checking each and every CPU
>> + * each and every time we start a new grace period.
>> + */
>
> What about using CPU_DYING and CPU_STARTING?
>
> Then this race wouldn't exist anymore.

Because I don't want to tie RCU too tightly to the details of the
online/offline implementation.  It is too easy for someone to make a
"simple" change and break things, especially given that the online/offline
code still seems to be adjusting a bit.

So I might well use CPU_DYING and CPU_STARTING, but I would still keep
the check for offlined CPUs in the force_quiescent_state() processing.
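
Roughly what I have in mind, as a sketch only (the struct layout and the
helper's name below are simplified for illustration, not lifted verbatim
from the patch):

	/*
	 * Sketch: credit an offlined CPU with a quiescent state, and
	 * send a reschedule IPI to an online CPU that has not yet
	 * passed through one.  Called from the forcing path only after
	 * the grace period has been seen to be delayed.
	 */
	struct rcu_data_sketch {
		int cpu;			/* CPU this per-CPU data covers. */
	};

	static int rcu_implicit_offline_qs_sketch(struct rcu_data_sketch *rdp)
	{
		if (cpu_is_offline(rdp->cpu))
			return 1;		/* Offline counts as quiescent. */
		smp_send_reschedule(rdp->cpu);	/* Online but slow: prod it. */
		return 0;
	}

That way, if I do add CPU_DYING/CPU_STARTING notifiers, they only make the
race rarer; correctness never depends on them.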

>> +static void force_quiescent_state(struct rcu_state *rsp, int relaxed)
>> +{
>> + [snip]
>> +       case RCU_FORCE_QS:
>> +
>> +               /* Check dyntick-idle state, send IPI to laggarts. */
>> +               if (rcu_process_dyntick(rsp, dyntick_recall_completed(rsp),
>> +                                       rcu_implicit_dynticks_qs))
>> +                       goto unlock_ret;
>> +
>> +               /* Leave state in case more forcing is required. */
>> +
>> +               break;
>
> Hmm - your code must loop multiple times over the cpus.
> I've used a different approach: more forcing is only required for a nohz cpu
> when it was hit within a long-running interrupt.
> Thus I've added a '->kick_poller' flag: rcu_irq_exit() reports back when
> the long-running interrupt completes. Never more than one loop over the
> outstanding cpus is required.

Do you send a reschedule IPI to CPUs that are not in dyntick-idle mode,
but that have failed to pass through a quiescent state?

In my case, more forcing is required only for a nohz CPU in a long-running
interrupt (as with your approach), for sending the aforementioned IPI,
and for checking for offlined CPUs as noted above.
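
For comparison, here is my reading of your ->kick_poller scheme as a rough
sketch; the struct, field, and helper names below are mine, made up purely
to illustrate the control flow, so please correct me if I have it wrong:

	/* Hypothetical hook standing in for however your code actually
	 * reports a quiescent state for a given CPU. */
	extern void rcu_report_quiescent_state(int cpu);

	struct rcu_dyntick_sketch {
		int dynticks;		/* Bumped on irq entry and exit. */
		int kick_poller;	/* Set by the forcing pass when this
					 * nohz CPU is caught in a
					 * long-running interrupt. */
	};

	void rcu_irq_exit_sketch(struct rcu_dyntick_sketch *rdt, int cpu)
	{
		rdt->dynticks++;		/* The interrupt is ending. */
		if (rdt->kick_poller) {
			rdt->kick_poller = 0;
			rcu_report_quiescent_state(cpu);
		}
	}

If that is the idea, then the difference is that my force_quiescent_state()
re-scans the remaining CPUs on a later invocation instead of being called
back, which is where the extra passes come from.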

							Thanx, Paul

Thread overview: 94+ messages
2008-08-21 23:43 [PATCH, RFC, tip/core/rcu] scalable classic RCU implementation Paul E. McKenney
2008-08-22  4:37 ` Ingo Molnar
2008-08-22 13:47   ` Paul E. McKenney
2008-08-22 17:22     ` Paul E. McKenney
2008-08-22 18:16       ` Josh Triplett
2008-08-23 16:07       ` Ingo Molnar
2008-08-24  2:44         ` Paul E. McKenney
2008-08-22 23:29 ` Josh Triplett
2008-08-23  1:53   ` Paul E. McKenney
2008-08-25 22:02     ` Josh Triplett
2008-08-26 16:05       ` Paul E. McKenney
2008-08-27  0:38         ` Josh Triplett
2008-08-27 18:34           ` Paul E. McKenney
2008-08-27 20:23             ` Josh Triplett
2008-08-27 20:41               ` Paul E. McKenney
2008-08-25 10:34   ` Peter Zijlstra
2008-08-25 15:16     ` Paul E. McKenney
2008-08-25 15:26       ` Peter Zijlstra
2008-08-27 18:28         ` Paul E. McKenney
2008-08-24  8:08 ` Manfred Spraul
2008-08-24 16:32   ` Paul E. McKenney
2008-08-24 18:25     ` Manfred Spraul
2008-08-24 21:19       ` Paul E. McKenney
2008-08-25  0:07 ` [PATCH, RFC, tip/core/rcu] v2 " Paul E. McKenney
2008-08-30  0:49   ` [PATCH, RFC, tip/core/rcu] v3 " Paul E. McKenney
2008-08-30  9:33     ` Peter Zijlstra
2008-08-30 14:10       ` Paul E. McKenney
2008-08-30 15:40         ` Peter Zijlstra
2008-08-30 19:38           ` Paul E. McKenney
2008-09-02 13:26           ` Mathieu Desnoyers
2008-09-02 13:41             ` Peter Zijlstra
2008-09-02 14:55               ` Paul E. McKenney
2008-08-30  9:58     ` Lai Jiangshan
2008-08-30 13:32       ` Manfred Spraul
2008-08-30 14:34         ` Paul E. McKenney
2008-08-31 10:58           ` Manfred Spraul
2008-08-31 17:20             ` Paul E. McKenney
2008-08-31 17:45               ` Manfred Spraul
2008-08-31 17:55                 ` Paul E. McKenney
2008-08-31 18:18                   ` Manfred Spraul
2008-08-31 19:23                     ` Paul E. McKenney
2008-08-30 14:29       ` Paul E. McKenney
2008-09-01  9:38     ` Andi Kleen
2008-09-02  1:05       ` Paul E. McKenney
2008-09-02  6:18         ` Andi Kleen
2008-09-05 15:29     ` [PATCH, RFC] v4 " Paul E. McKenney
2008-09-05 19:33       ` Andrew Morton
2008-09-05 23:04         ` Paul E. McKenney
2008-09-05 23:52           ` Andrew Morton
2008-09-06  4:16             ` Paul E. McKenney
2008-09-06 16:37       ` Manfred Spraul
2008-09-07 17:25         ` Paul E. McKenney
2008-09-07 10:18       ` [RFC, PATCH] Add a CPU_STARTING notifier (was: Re: [PATCH, RFC] v4 scalable classic RCU implementation) Manfred Spraul
2008-09-07 11:07         ` Andi Kleen
2008-09-07 19:46         ` Paul E. McKenney
2008-09-15 16:02       ` [PATCH, RFC] v4 scalable classic RCU implementation Paul E. McKenney
2008-09-16 16:52         ` Manfred Spraul
2008-09-16 17:30           ` Paul E. McKenney
2008-09-16 17:48             ` Manfred Spraul
2008-09-16 18:22               ` Paul E. McKenney
2008-09-21 11:09               ` Manfred Spraul
2008-09-21 21:14                 ` Paul E. McKenney
2008-09-23 23:53         ` [PATCH, RFC] v6 " Paul E. McKenney
2008-09-25  7:26           ` Ingo Molnar
2008-09-25 14:05             ` Paul E. McKenney
2008-09-25  7:29           ` Ingo Molnar
2008-09-25 14:18             ` Paul E. McKenney
2008-10-10 16:09           ` [PATCH, RFC] v7 " Paul E. McKenney
2008-10-12 15:52             ` Manfred Spraul
2008-10-12 22:46               ` Paul E. McKenney [this message]
2008-10-13 18:03                 ` Manfred Spraul
2008-10-15  1:11                   ` Paul E. McKenney
2008-10-15  8:13                     ` Manfred Spraul
2008-10-15 15:26                       ` Paul E. McKenney
2008-10-22 18:41                         ` Manfred Spraul
2008-10-22 21:02                           ` Paul E. McKenney
2008-10-22 21:24                             ` Manfred Spraul
2008-10-27 16:45                               ` Paul E. McKenney
2008-10-27 19:48                                 ` Manfred Spraul
2008-10-27 23:52                                   ` Paul E. McKenney
2008-10-28  5:30                                     ` Manfred Spraul
2008-10-28 15:17                                       ` Paul E. McKenney
2008-10-28 17:21                                         ` Manfred Spraul
2008-10-28 17:35                                           ` Paul E. McKenney
2008-10-17  8:34             ` Gautham R Shenoy
2008-10-17 15:35               ` Gautham R Shenoy
2008-10-17 15:46                 ` Paul E. McKenney
2008-10-17 15:43               ` Paul E. McKenney
2008-12-08 18:42               ` Paul E. McKenney
2008-11-02 20:10             ` Manfred Spraul
2008-11-03 20:33               ` Paul E. McKenney
2008-11-05 19:48                 ` Manfred Spraul
2008-11-05 21:27                   ` Paul E. McKenney
2008-11-15 23:20             ` [PATCH, RFC] v8 " Paul E. McKenney
