linux-kernel.vger.kernel.org archive mirror
From: "Paul E. McKenney" <paulmck@us.ibm.com>
To: Jack Steiner <steiner@sgi.com>
Cc: Manfred Spraul <manfred@dbl.q-ag.de>,
	linux-kernel@vger.kernel.org, lse-tech@lists.sourceforge.net
Subject: Re: [Lse-tech] [RFC, PATCH] 1/5 rcu lock update: Add per-cpu batch counter
Date: Wed, 26 May 2004 11:31:51 -0700	[thread overview]
Message-ID: <20040526183151.GT1295@us.ibm.com> (raw)
In-Reply-To: <20040525203215.GB5127@sgi.com>

On Tue, May 25, 2004 at 03:32:15PM -0500, Jack Steiner wrote:
> On Tue, May 25, 2004 at 07:35:20AM +0200, Manfred Spraul wrote:
> > Hi,
> > 
> > Step one for reducing cacheline thrashing within rcupdate.c:
> > 
> > The current code uses the rcu_cpu_mask bitmap both for keeping track
> > of the cpus that haven't gone through a quiescent state and for
> > checking whether a cpu should look for quiescent states. The bitmap
> > is frequently modified, and the check is done by polling -
> > together these cause cacheline thrashing.
> > 
> .....
> 
> 
> 
> It looks like the patch fixes the problem that I saw.

Impressive results!  Am reviewing the patch set.

Was this using all of Manfred's patches, and, if not, which ones?

							Thanx, Paul

> I ran a 2.6.6+rcupatch kernel on a 512p system. Previously, this system
> showed ALL cpus becoming ~50% busy when running a "ls" loop on a single
> cpu.  This behavior is now fixed.
> 
> The following shows the system overhead on cpus 0-9 (higher cpus are
> similar):
> 
> 
> idle system
>   CPU    0      1      2      3      4      5      6      7      8      9
>       0.70   0.21   0.19   0.17   0.21   0.20   0.20   0.19   0.18   0.19
>       0.72   0.21   0.18   0.17   0.21   0.21   0.21   0.20   0.18   0.19
>       0.70   0.20   0.18   0.17   0.22   0.20   0.19   0.20   0.19   0.20
>       0.71   0.19   0.18   0.17   0.21   0.20   0.19   0.20   0.18   0.19
>       0.71   0.23   0.19   0.17   0.21   0.20   0.19   0.20   0.19   0.20
> 
> 
> running an "ls" loop on cpu 4 (the number for cpu 4 is the "ls" command itself - NOT overhead)
>   CPU    0      1      2      3      4      5      6      7      8      9
>       0.83   2.08   0.31   0.29  97.87   1.25   0.32   0.32   0.32   0.33
>       0.88   0.52   0.30   0.28  96.46   1.32   0.32   0.23   0.31   0.30
>       0.84   1.27   0.31   0.29  97.15   1.38   0.33   0.31   0.31   0.31
>       0.83   2.81   0.32   0.30  98.61   1.14   0.31   0.33   0.32   0.37
>       0.84   2.32   0.32   0.40  97.91   1.43   0.33   0.43   0.32   0.36
> 
> There is a small increase in system overhead on all cpus but not the 50% seen
> earlier.
> 
> I don't understand, however, why the overhead increased on cpus 1 & 5. This
> may be a test anomaly, but I suspect something else. I'll look into it later.
> 
> I also noticed one other anomaly. /proc/interrupts shows the number of interrupts
> for each cpu. Under most circumstances, /proc/interrupts shows 1024 timer
> interrupts/sec on each cpu. When the "ls" script is running, the number 
> of timer interrupts/sec on the cpu running "ls" drops to ~650. I'm not
> sure why (perhaps running too long somewhere with ints disabled). 
> 
> I'll look further....
> 
> -- 
> Thanks
> 
> Jack Steiner (steiner@sgi.com)          651-683-5302
> Principal Engineer                      SGI - Silicon Graphics, Inc.


Thread overview: 3+ messages
2004-05-25  5:35 [RFC, PATCH] 1/5 rcu lock update: Add per-cpu batch counter Manfred Spraul
2004-05-25 20:32 ` [Lse-tech] " Jack Steiner
2004-05-26 18:31   ` Paul E. McKenney [this message]
