From: Joel Fernandes <joel@joelfernandes.org>
To: "Paul E. McKenney" <paulmck@linux.ibm.com>
Cc: linux-kernel@vger.kernel.org, josh@joshtriplett.org,
	rostedt@goodmis.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com
Subject: Re: dyntick-idle CPU and node's qsmask
Date: Tue, 20 Nov 2018 12:42:43 -0800	[thread overview]
Message-ID: <20181120204243.GA22801@google.com> (raw)
In-Reply-To: <20181111183618.GY4170@linux.ibm.com>

On Sun, Nov 11, 2018 at 10:36:18AM -0800, Paul E. McKenney wrote:
> On Sun, Nov 11, 2018 at 10:09:16AM -0800, Joel Fernandes wrote:
> > On Sat, Nov 10, 2018 at 08:22:10PM -0800, Paul E. McKenney wrote:
> > > On Sat, Nov 10, 2018 at 07:09:25PM -0800, Joel Fernandes wrote:
> > > > On Sat, Nov 10, 2018 at 03:04:36PM -0800, Paul E. McKenney wrote:
> > > > > On Sat, Nov 10, 2018 at 01:46:59PM -0800, Joel Fernandes wrote:
> > > > > > Hi Paul and everyone,
> > > > > > 
> > > > > > I was tracing/studying the RCU code today in the paul/dev branch and noticed
> > > > > > that for dyntick-idle CPUs, the RCU GP thread is clearing the rnp->qsmask bits
> > > > > > in the leaf node corresponding to the idle CPUs, and reporting a QS on their
> > > > > > behalf.
> > > > > > 
> > > > > > rcu_sched-10    [003]    40.008039: rcu_fqs:              rcu_sched 792 0 dti
> > > > > > rcu_sched-10    [003]    40.008039: rcu_fqs:              rcu_sched 801 2 dti
> > > > > > rcu_sched-10    [003]    40.008041: rcu_quiescent_state_report: rcu_sched 805 5>0 0 0 3 0
> > > > > > 
> > > > > > That's all good, but I was wondering if we can do better for the idle CPUs if
> > > > > > we can somehow not set the qsmask of the node in the first place. Then no
> > > > > > reporting of a quiescent state would be needed for idle CPUs, right?
> > > > > > And we would also not need to acquire the rnp lock, I think.
> > > > > > 
> > > > > > At least for a single-node tree RCU system, it seems that would avoid needing
> > > > > > to acquire the lock without complications. Anyway, let me know your thoughts;
> > > > > > I'm happy to discuss this in the hallways at LPC as well for folks who are
> > > > > > attending :)
> > > > > 
> > > > > We could, but that would require consulting the rcu_data structure for
> > > > > each CPU while initializing the grace period, thus increasing the number
> > > > > of cache misses during grace-period initialization and also shortly after
> > > > > for any non-idle CPUs.  This seems backwards on busy systems where each
> > > > 
> > > > When I traced, it appeared to me that the rcu_data structure of a remote CPU
> > > > was being consulted anyway by the rcu_sched thread. So it seems like such a
> > > > cache miss would happen anyway, whether it is during grace-period
> > > > initialization or during the fqs stage? I guess I'm trying to say that the
> > > > consultation of the remote CPU's rcu_data happens anyway.
> > > 
> > > Hmmm...
> > > 
> > > The rcu_gp_init() function does access an rcu_data structure, but it is
> > > that of the current CPU, so shouldn't involve a communications cache miss,
> > > at least not in the common case.
> > > 
> > > Or are you seeing these cross-CPU rcu_data accesses in rcu_gp_fqs() or
> > > functions that it calls?  In that case, please see below.
> > 
> > Yes, it was rcu_implicit_dynticks_qs called from rcu_gp_fqs.
> > 
> > > > > CPU will with high probability report its own quiescent state before three
> > > > > jiffies pass, in which case the cache misses on the rcu_data structures
> > > > > would be wasted motion.
> > > > 
> > > > If all the CPUs are busy and reporting their QS themselves, then I think the
> > > > qsmask is likely 0, so rcu_implicit_dynticks_qs (called from
> > > > force_qs_rnp) wouldn't be called, and so there would be no cache misses on
> > > > rcu_data, right?
> > > 
> > > Yes, but assuming that all CPUs report their quiescent states before
> > > the first call to rcu_gp_fqs().  One exception is when some CPU is
> > > looping in the kernel for many milliseconds without passing through a
> > > quiescent state.  This is because for recent kernels, cond_resched()
> > > is not a quiescent state until the grace period is something like 100
> > > milliseconds old.  (For older kernels, cond_resched() was never an RCU
> > > quiescent state unless it actually scheduled.)
> > > 
> > > Why wait 100 milliseconds?  Because otherwise the increase in
> > > cond_resched() overhead shows up all too well, causing 0day test robot
> > > to complain bitterly.  Besides, I would expect that in the common case,
> > > CPUs would be executing usermode code.
> > 
> > Makes sense. I was also wondering about this other thing you mentioned about
> > waiting for 3 jiffies before reporting the idle CPU's quiescent state. Does
> > that mean that even if a single CPU is dyntick-idle for a long period of
> > time, then the minimum grace period duration would be at least 3 jiffies? On
> > our mobile embedded devices, the jiffy is set to 3.33ms (HZ=300) to keep power
> > consumption low. Not that I'm saying it's an issue or anything (since IIUC if
> > someone wants shorter grace periods, they should just use expedited GPs), but
> > it sounds like the GP would be shorter if we just set the qsmask early on
> > somehow, provided we can manage the overhead of doing so.
> 
> First, there is some autotuning of the delay based on HZ:
> 
> #define RCU_JIFFIES_TILL_FORCE_QS (1 + (HZ > 250) + (HZ > 500))
> 
> So at HZ=300, you should be seeing a two-jiffy delay rather than the
> usual HZ=1000 three-jiffy delay.  Of course, this means that the delay
> is 6.67ms rather than the usual 3ms, but the theory is that lower HZ
> rates often mean slower instruction execution and thus a desire for
> lower RCU overhead.  There is further autotuning based on number of
> CPUs, but this does not kick in until you have 256 CPUs on your system,
> and I bet that smartphones aren't there yet.  Nevertheless, check out
> RCU_JIFFIES_FQS_DIV for more info on this.
> 
> But you can always override this autotuning using the following kernel
> boot parameters:
> 
> rcutree.jiffies_till_first_fqs
> rcutree.jiffies_till_next_fqs
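
To double-check the autotuning arithmetic for our HZ=300 configuration, here is
a tiny userspace sketch of the macro quoted above (just me verifying the math,
obviously not kernel code):

#include <stdio.h>

/* Same formula as the RCU_JIFFIES_TILL_FORCE_QS definition above. */
#define TILL_FORCE_QS(hz)	(1 + ((hz) > 250) + ((hz) > 500))

int main(void)
{
	const int hz[] = { 100, 250, 300, 1000 };

	for (unsigned int i = 0; i < sizeof(hz) / sizeof(hz[0]); i++)
		printf("HZ=%4d: %d jiffies until FQS = %.2f ms\n",
		       hz[i], TILL_FORCE_QS(hz[i]),
		       TILL_FORCE_QS(hz[i]) * 1000.0 / hz[i]);
	return 0;
}

which prints 2 jiffies (6.67 ms) for HZ=300 and 3 jiffies (3 ms) for HZ=1000,
matching your numbers.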

Slightly related, I was just going through your patch in the dev branch "doc:
Now jiffies_till_sched_qs solicits from cond_resched()".

If I understand correctly, what you're trying to do is set
rcu_data.rcu_urgent_qs from rcu_implicit_dynticks_qs if you've not heard from
the CPU for long enough.

Then in the other paths, you read this value and simulate a dyntick-idle
transition even though the CPU may not really be going into dyntick-idle.
In the scheduler tick, you also use it to set NEED_RESCHED appropriately.

Did I get it right so far?
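
In rough pseudo-kernel-C, the picture I have in my head is something like the
following (just a sketch from my reading of the dev branch, so field names and
placement may well be off):

	/* Sketch only, not the actual tree.c code. */
	static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
	{
		/* ... dyntick-idle and offline checks elided ... */

		/*
		 * Holdout CPU: ask it for help.  Setting rcu_urgent_qs
		 * makes cond_resched() and the scheduler tick report a
		 * quiescent state (or set NEED_RESCHED) on their next
		 * pass through this CPU.
		 */
		if (time_after(jiffies,
			       rcu_state.gp_start + jiffies_till_sched_qs))
			WRITE_ONCE(rdp->rcu_urgent_qs, true);

		return 0;
	}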

I was wondering whether we could simplify rcu_note_context_switch (the parts
that call rcu_momentary_dyntick_idle) if we did the following in
rcu_implicit_dynticks_qs.

Since we already call rcu_qs in rcu_note_context_switch, that would clear the
rdp->cpu_no_qs flag, so there should be no need to call
rcu_momentary_dyntick_idle from rcu_note_context_switch.
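
To make that concrete, roughly what I have in mind on the
rcu_note_context_switch() side (purely illustrative; I'm waving away the
preemptible-RCU and Tasks-RCU parts here):

	/* Illustrative sketch, not a real patch. */
	void rcu_note_context_switch(bool preempt)
	{
		trace_rcu_utilization(TPS("Start context switch"));
		rcu_qs();	/* Clears this CPU's rdp->cpu_no_qs.b.norm. */
		/*
		 * With the !rdp->cpu_no_qs.b.norm check added to
		 * rcu_implicit_dynticks_qs() (diff below), the FQS scan can
		 * pick up the quiescent state recorded by rcu_qs() above,
		 * so the rcu_momentary_dyntick_idle() call that currently
		 * lives here would no longer be needed.
		 */
		trace_rcu_utilization(TPS("End context switch"));
	}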

I think this would simplify cond_resched as well.  Could this avoid the need
for rcu_all_qs at all? Hopefully I didn't miss any Tasks-RCU corner cases.

Basically, for some background: I was wondering whether we can simplify the
code that calls rcu_momentary_dyntick_idle, since we already register a QS in
other ways (such as by resetting cpu_no_qs).

I should probably start drawing some pictures to make sense of everything,
but do let me know if I have a point ;-) Thanks for your time.

- Joel

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index c818e0c91a81..5aa0259c014d 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1063,7 +1063,7 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
 	 * read-side critical section that started before the beginning
 	 * of the current RCU grace period.
 	 */
-	if (rcu_dynticks_in_eqs_since(rdp, rdp->dynticks_snap)) {
+	if (rcu_dynticks_in_eqs_since(rdp, rdp->dynticks_snap) || !rdp->cpu_no_qs.b.norm) {
 		trace_rcu_fqs(rcu_state.name, rdp->gp_seq, rdp->cpu, TPS("dti"));
 		rcu_gpnum_ovf(rnp, rdp);
 		return 1;


Thread overview: 14+ messages
2018-11-10 21:46 dyntick-idle CPU and node's qsmask Joel Fernandes
2018-11-10 23:04 ` Paul E. McKenney
2018-11-11  3:09   ` Joel Fernandes
2018-11-11  4:22     ` Paul E. McKenney
2018-11-11 18:09       ` Joel Fernandes
2018-11-11 18:36         ` Paul E. McKenney
2018-11-11 21:04           ` Joel Fernandes
2018-11-20 20:42           ` Joel Fernandes [this message]
2018-11-20 22:28             ` Paul E. McKenney
2018-11-20 22:34               ` Paul E. McKenney
2018-11-21  2:06               ` Joel Fernandes
2018-11-21  2:41                 ` Paul E. McKenney
2018-11-21  4:37                   ` Joel Fernandes
2018-11-21 14:39                     ` Paul E. McKenney
