linux-kernel.vger.kernel.org archive mirror
From: "Paul E. McKenney" <paulmck@us.ibm.com>
To: Manfred Spraul <manfred@colorfullife.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>,
	linux-kernel@vger.kernel.org,
	Dipankar Sarma <dipankar@in.ibm.com>,
	Andrew Morton <akpm@osdl.org>
Subject: Re: [PATCH] rcu: eliminate rcu_data.last_qsctr
Date: Wed, 5 Jan 2005 10:30:07 -0800
Message-ID: <20050105183007.GA1272@us.ibm.com>
In-Reply-To: <41D2CF3B.4040304@colorfullife.com>

On Wed, Dec 29, 2004 at 04:37:31PM +0100, Manfred Spraul wrote:
> Oleg Nesterov wrote:
> 
> >last_qsctr is used in rcu_check_quiescent_state() exclusively.
> >We can reset qsctr at the start of the grace period, and then
> >just test qsctr against 0.
> >
> > 
> >
> It seems the patch got lost, I've updated it a bit and resent it to Andrew.
> 
> But: I think there is the potential for an even larger cleanup, although 
> this would be more a rewrite:
> Get rid of rcu_check_quiescent_state and instead use something like this 
> in rcu_qsctr_inc:
> 
> static inline void rcu_qsctr_inc(int cpu)
> {
>         struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
>         struct rcu_ctrlblk *rcp = &rcu_ctrlblk;
>         struct rcu_state *rsp = &rcu_state;
> 
>         if (rdp->quiescbatch != rcp->cur) {
>                 /*
>                  * A new grace period is running, and we are at a
>                  * quiescent point, so complete it.
>                  */
>                 spin_lock(&rsp->lock);
>                 rdp->quiescbatch = rcp->cur;
>                 cpu_quiet(rdp->cpu, rcp, rsp);
>                 spin_unlock(&rsp->lock);
>         }
> }
> 
> It's just an idea, and it needs testing on big systems - does reading
> the global rcp from every schedule() call cause any problems? The cache
> line is virtually read-only, so it shouldn't cause thrashing, but who knows?

Hello, Manfred,

The main concern I have with this is not cache thrashing of rcp->cur,
but shrinking the grace periods on large systems, which can result in
extra overhead per callback, since the shorter grace periods will tend
to have fewer callbacks.  We saw this problem on some of the early
RCU-infrastructure patches.

Another approach would be to conditionally compile the two versions,
though that might make the code more complex.

						Thanx, Paul


Thread overview: 4+ messages
2004-11-28 17:39 [PATCH] rcu: eliminate rcu_data.last_qsctr Oleg Nesterov
2004-11-29 19:00 ` Manfred Spraul
2004-12-29 15:37 ` Manfred Spraul
2005-01-05 18:30   ` Paul E. McKenney [this message]
