linux-kernel.vger.kernel.org archive mirror
From: jw schultz <jw@pegasys.ws>
To: linux-kernel@vger.kernel.org
Subject: Re: Net device byte statistics
Date: Fri, 25 Jul 2003 17:44:20 -0700	[thread overview]
Message-ID: <20030726004420.GE25838@pegasys.ws> (raw)
In-Reply-To: <3F21C68E.4080209@candelatech.com>

On Fri, Jul 25, 2003 at 05:08:46PM -0700, Ben Greear wrote:
> Jeff Sipek wrote:
> >-----BEGIN PGP SIGNED MESSAGE-----
> >Hash: SHA1
> >
> >On Friday 25 July 2003 17:55, jw schultz wrote:
> >
> 
> >>My thought would be to use 96bits for each counter.  In-kernel
> >>code would run periodically doing something like this:
> >>
> >>	curval = counter.in_kernel;
> >>			/* get it in a register for atomicity */
> >>	if (curval < counter.user_low)
> >>			/* counter wrapped since last sample */
> >>		++counter.user_high;
> >>	counter.user_low = curval;
> 
> What about every 30 seconds or so, detect wraps, and bump the 'high' counter
> if it wraps.  (Check more often if you can wrap more than once in 30 secs).

Yes, how often the component needs to run will depend on the
fastest-wrapping counter.

> Then, upon read by user-space (or whatever needs 64-bit counters):
> 
> 1) check wrap
> 2) grab low bits and OR them with the high bits.
> 3) check wrap again.  If a wrap happened, try again.  The assumption
>    is that it could never wrap more than once during the time you are
>    checking.

If userspace needs instantaneous values, it would be more
efficient to have userspace run the update_64bit_counter
logic for just the counter it wants than to do multiple
wrap checks.
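The retry loop Ben describes could be sketched like this (the field
names are hypothetical: `in_kernel` stands for the raw 32-bit counter
and `user_high` for the wrap count maintained by the periodic updater;
it assumes, as above, that at most one wrap can occur while we look):

```c
#include <stdint.h>

struct stats64 {
	volatile uint32_t in_kernel;	/* raw 32-bit driver counter */
	volatile uint32_t user_high;	/* wrap count, bumped by updater */
};

/* Lock-free 64-bit read: if the updater folded in a wrap between our
 * two reads of user_high, the pair may be inconsistent, so retry. */
static uint64_t read_counter64(const struct stats64 *c)
{
	uint32_t high, low;

	do {
		high = c->user_high;
		low = c->in_kernel;
	} while (high != c->user_high);	/* wrap happened; try again */

	return ((uint64_t)high << 32) | low;
}
```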

> I think this could give us very low overhead, and extremely precise 64-bit
> reads.  And, I think it would not need locks in the fast path..but I could
> also be missing something :)

Per-cpu counters: if this is done, a variant of it for
per-cpu counters would be helpful.  Per-cpu counters have
the advantage of reducing cache-line bouncing, but I don't
think they should be used as a band-aid (elasto-plast) for
counter wrapping.  Besides, how many 12GHz 4-way
hyperthreaded (shared-cache) CPUs do you need?  And if you
only have one, should you have per-cpu counters?  I don't
think so.
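For comparison, the per-cpu layout keeps one counter per CPU and sums
the slots on read.  A minimal userspace sketch (plain C with a fixed
illustrative CPU count, not the kernel's percpu API):

```c
#include <stdint.h>

#define NR_CPUS 4	/* illustrative */

/* One 64-bit counter per CPU.  Each CPU writes only its own slot,
 * so the hot path never bounces a shared cache line. */
static uint64_t pcpu_bytes[NR_CPUS];

static void count_bytes(int cpu, uint32_t len)
{
	pcpu_bytes[cpu] += len;	/* no lock: slot is CPU-private */
}

/* Reader folds all slots together.  The total may be slightly stale,
 * but each per-CPU value it reads is monotonic. */
static uint64_t total_bytes(void)
{
	uint64_t sum = 0;

	for (int i = 0; i < NR_CPUS; i++)
		sum += pcpu_bytes[i];
	return sum;
}
```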

-- 
________________________________________________________________
	J.W. Schultz            Pegasystems Technologies
	email address:		jw@pegasys.ws

		Remember Cernan and Schmitt

Thread overview: 16+ messages
2003-07-24 23:56 Net device byte statistics Fredrik Tolf
2003-07-25  0:22 ` Bernd Eckenfels
2003-07-25  2:37   ` Fredrik Tolf
2003-07-25  3:26     ` Bernd Eckenfels
2003-07-25  3:49       ` Jeff Sipek
2003-07-25  3:54     ` Jeff Sipek
2003-07-25  7:03   ` Denis Vlasenko
2003-07-25  7:01     ` Andre Hedrick
2003-07-25 16:23     ` Jeff Sipek
2003-07-25 17:20       ` Randy.Dunlap
2003-07-25 17:55         ` Jeff Sipek
2003-07-25 17:58           ` Randy.Dunlap
2003-07-25 21:55             ` jw schultz
2003-07-25 22:51               ` Jeff Sipek
2003-07-26  0:08                 ` Ben Greear
2003-07-26  0:44                   ` jw schultz [this message]
