linux-kernel.vger.kernel.org archive mirror
* how to measure scheduler latency on powerpc?  realfeel doesn't work due to /dev/rtc issues
@ 2003-05-08 22:12 Chris Friesen
  2003-05-09  0:13 ` William Lee Irwin III
  2003-05-09  8:17 ` how to measure scheduler latency on powerpc? realfeel doesn Giuliano Pochini
  0 siblings, 2 replies; 26+ messages in thread
From: Chris Friesen @ 2003-05-08 22:12 UTC (permalink / raw)
  To: linux-kernel


I'm trying to test the scheduler latency on a powerpc platform.  It appears that 
a realfeel type of program won't work since you can't program /dev/rtc to 
generate interrupts on powerpc.  Is there anything similar which could be done?

Thanks,

Chris

-- 
Chris Friesen                    | MailStop: 043/33/F10
Nortel Networks                  | work: (613) 765-0557
3500 Carling Avenue              | fax:  (613) 765-2986
Nepean, ON K2H 8E9 Canada        | email: cfriesen@nortelnetworks.com



^ permalink raw reply	[flat|nested] 26+ messages in thread
* RE: how to measure scheduler latency on powerpc?  realfeel doesn't work due to /dev/rtc issues
@ 2003-05-10  0:39 Perez-Gonzalez, Inaky
  2003-05-12 23:55 ` William Lee Irwin III
  0 siblings, 1 reply; 26+ messages in thread
From: Perez-Gonzalez, Inaky @ 2003-05-10  0:39 UTC (permalink / raw)
  To: 'Chris Friesen', 'William Lee Irwin III'
  Cc: 'Linux Kernel Mailing List'



> From: Chris Friesen [mailto:cfriesen@nortelnetworks.com]
> 
> William Lee Irwin III wrote:
> 
> > I don't understand why you're obsessed with interrupts. Just run your
> > load and spray the scheduler latency stats out /proc/
> 
> I'm obsessed with interrupts because it gives me a higher sampling rate.
> 
> I could set up an itimer for a recurring 10ms timeout and see how much
> extra I waited, but then I can only get 100 samples/sec.
> 
> With /dev/rtc (on intel) you can get 20x more samples in the same amount
> of time.
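The itimer fallback described in the quote can be sketched in userspace. This is a minimal illustration, not realfeel itself: arm a recurring 10 ms interval timer, record when each SIGALRM actually arrives, and compare against the ideal tick schedule. The function and variable names are made up for the example; the 10 ms period (hence the 100 samples/sec ceiling) is the one from the text.

```python
# Sketch: measure how late each itimer expiry is delivered,
# relative to its ideal 10 ms schedule.  Unix-only (SIGALRM).
import signal
import time

PERIOD = 0.01      # 10 ms -> at most 100 samples/sec
NSAMPLES = 10

def measure(nsamples=NSAMPLES, period=PERIOD):
    ticks = []

    def on_alarm(signum, frame):
        ticks.append(time.monotonic())

    old = signal.signal(signal.SIGALRM, on_alarm)
    start = time.monotonic()
    signal.setitimer(signal.ITIMER_REAL, period, period)
    try:
        while len(ticks) < nsamples:
            signal.pause()              # sleep until the next SIGALRM
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)   # disarm the timer
        signal.signal(signal.SIGALRM, old)
    ticks = ticks[:nsamples]
    # "extra wait": actual wakeup time minus the ideal tick time
    return [t - (start + (i + 1) * period) for i, t in enumerate(ticks)]

if __name__ == "__main__":
    lats = measure()
    print("max extra wait: %.3f ms" % (max(lats) * 1e3))
```

The max "extra wait" under load is a rough stand-in for scheduling/delivery latency, at the cost of the low sampling rate being complained about here.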

Okay, crazy idea here ...

You are talking about a bladed system, right? So you probably
have two network interfaces in there [it should work with only
one too].

What if you rip off the driver for the network interface and
create a new breed? Set up a special link with a crossover Ethernet
cable and have one machine send really short Ethernet frames
to the sampling machine.

Maybe if you can manage to get the Ethernet chip to interrupt
every time a new frame arrives, you can use that as a sampling
measure. I'd say the key would be to have the sending machine
be really precise about the sending ... I guess it can be worked
out.

I don't know how high an interrupt rate you could get. OTOH,
rough numbers ... let's say 100 Mbit/s is 10 MByte/s, use
a really small frame [let's say a few bytes only, 32], add
the MACs [I don't remember the frame format; assuming 12 bytes
for source and destination MACs, plus 8 in overhead (again, I
made it up)], 52 bytes ... let's round up to 64 bytes per frame.

So

10 MB/s / 64 B/frame = 163840 frames/s

I don't know how feasible this really is, or whether my
calculations are screwed up, but it might be worth a try ...
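Spelling out the back-of-envelope arithmetic above (it checks out, using the mail's own rough "10 MB/s" figure; note that real 100BASE-T also spends 8 bytes of preamble plus a 12-byte inter-frame gap per frame, so the actual wire-rate ceiling for minimum-size 64-byte frames is lower, about 148,800 frames/s):

```python
# Reproduce the rough frames/s estimate from the text.
MBYTE = 1024 * 1024
link_bytes_per_sec = 10 * MBYTE   # the text's rough 100 Mbit/s ~ 10 MByte/s
frame_bytes = 64                  # the rounded-up per-frame guess
frames_per_sec = link_bytes_per_sec // frame_bytes
print(frames_per_sec)             # → 163840, matching the estimate
```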

I did a quick test; from one of my computers, m1, I did:

m1:~ $ while true; do cat BIGFILE; done | ssh m2 cat > /dev/null

while on m2, I did:

m2:~ $ grep eth0 /proc/interrupts; sleep 2m; grep eth0 /proc/interrupts
 18:      77457      68483   IO-APIC-level  eth0
 18:     397390     412559   IO-APIC-level  eth0
m2:~ $ 

total    319933  +  344076   = 664009
in 120 seconds ... 664009 / 120 = 5533 Hz ~ 2770 Hz per CPU.

not bad, wouldn't this work?

[this is with a 1500 MTU through a hub ... or a switch, I
don't really know ...]
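That hand measurement can be automated. The sketch below assumes the /proc/interrupts layout shown in the session above (IRQ number, one counter per CPU, then the controller and device names); the function names are made up for illustration, and "eth0" is just the device from the example.

```python
# Snapshot per-CPU interrupt counts for one device and turn
# two snapshots into an interrupt rate.
def irq_counts(tag, text=None):
    """Per-CPU counters from the /proc/interrupts line for `tag`."""
    if text is None:
        with open("/proc/interrupts") as f:
            text = f.read()
    for line in text.splitlines():
        if line.strip().endswith(tag):
            fields = line.split(":", 1)[1].split()
            counts = []
            for f in fields:
                if f.isdigit():
                    counts.append(int(f))
                else:
                    break          # hit the controller/device columns
            return counts
    raise LookupError(tag)

def irq_rate(before, after, seconds):
    """Total interrupts/sec across all CPUs between two snapshots."""
    return sum(a - b for a, b in zip(after, before)) / seconds
```

Feeding it the two snapshots from the session above, 120 seconds apart, reproduces the 664009 / 120 ≈ 5533 Hz figure.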

Iñaky Pérez-González -- Not speaking for Intel -- all opinions are my own
(and my fault)



[parent not found: <493798056@toto.iv>]
* RE: how to measure scheduler latency on powerpc?  realfeel doesn't work due to /dev/rtc issues
@ 2003-05-13  0:20 Perez-Gonzalez, Inaky
  2003-05-13  1:02 ` William Lee Irwin III
  0 siblings, 1 reply; 26+ messages in thread
From: Perez-Gonzalez, Inaky @ 2003-05-13  0:20 UTC (permalink / raw)
  To: 'William Lee Irwin III'
  Cc: 'Chris Friesen', 'Linux Kernel Mailing List'

> From: William Lee Irwin III [mailto:wli@holomorphy.com]
> 
> William Lee Irwin III wrote:
> >>> I don't understand why you're obsessed with interrupts. Just run your
> >>> load and spray the scheduler latency stats out /proc/
> 
> From: Chris Friesen [mailto:cfriesen@nortelnetworks.com]
> >> I'm obsessed with interrupts because it gives me a higher sampling
> >> rate.
> >> I could set up an itimer for a recurring 10ms timeout and see how much
> >> extra I waited, but then I can only get 100 samples/sec. With
> >> /dev/rtc (on intel) you can get 20x more samples in the same amount
> >> of time.
> 
> On Fri, May 09, 2003 at 05:39:03PM -0700, Perez-Gonzalez, Inaky wrote:
> > Okay, crazy idea here ...
> > You are talking about a bladed system, right? So you probably
> > have two network interfaces in there [it should work with only
> > one too].
> > What if you rip off the driver for the network interface and
> > create a new breed? Set up a special link with a crossover Ethernet
> > cable and have one machine send really short Ethernet frames
> 
> This is ridiculous. Just make sure you're not sharing interrupts and
> count cycles starting at the ISR instead of wakeup and tag events
> properly if you truly believe that to be your metric. You, as the
> kernel, are notified whenever the interrupts occur and can just look
> at the time of day and cycle counts.

Well, I am only suggesting a way to _FORCE_ interrupts to happen
at a certain rate controllable by _SOMEBODY_, not as the system
gets them. Chris was concerned about not having a way to 
_GENERATE_ interrupts at a certain rate.

What you are suggesting is the other part of the picture, how to
measure the latency and AFAICS, it is not part of the problem of
generating the interrupts.

Iñaky Pérez-González -- Not speaking for Intel -- all opinions are my own
(and my fault)

* RE: how to measure scheduler latency on powerpc?  realfeel doesn't work due to /dev/rtc issues
@ 2003-05-13  2:08 Perez-Gonzalez, Inaky
  0 siblings, 0 replies; 26+ messages in thread
From: Perez-Gonzalez, Inaky @ 2003-05-13  2:08 UTC (permalink / raw)
  To: 'William Lee Irwin III'
  Cc: 'Chris Friesen', 'Linux Kernel Mailing List'

> From: William Lee Irwin III [mailto:wli@holomorphy.com]
> 
> From: William Lee Irwin III [mailto:wli@holomorphy.com]
> >> This is ridiculous. Just make sure you're not sharing interrupts and
> >> count cycles starting at the ISR instead of wakeup and tag events
> >> properly if you truly believe that to be your metric. You, as the
> >> kernel, are notified whenever the interrupts occur and can just look
> >> at the time of day and cycle counts.
> 
> On Mon, May 12, 2003 at 05:20:39PM -0700, Perez-Gonzalez, Inaky wrote:
> > Well, I am only suggesting a way to _FORCE_ interrupts to happen
> > at a certain rate controllable by _SOMEBODY_, not as the system
> > gets them. Chris was concerned about not having a way to
> > _GENERATE_ interrupts at a certain rate.
> > What you are suggesting is the other part of the picture, how to
> > measure the latency and AFAICS, it is not part of the problem of
> > generating the interrupts.
> 
> It also seems somewhat pointless to measure it under artificial
> conditions. Interrupts happen often anyway and you probably want to

Your artificial conditions are your control measurements. Then you
add the loads in the background; by being able to selectively add
and remove loads (the real live loads), you can more easily
identify who is causing delays and under what conditions. It is
not as thorough as a full code analysis ... but if your coverage
is well done, it can help a lot.

But I am sure you know all this already.

Iñaky Pérez-González -- Not speaking for Intel -- all opinions are my own
(and my fault)


end of thread, other threads:[~2003-05-13  1:55 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2003-05-08 22:12 how to measure scheduler latency on powerpc? realfeel doesn't work due to /dev/rtc issues Chris Friesen
2003-05-09  0:13 ` William Lee Irwin III
2003-05-09  0:38   ` Davide Libenzi
2003-05-09  0:38     ` William Lee Irwin III
2003-05-09  0:56       ` Richard B. Johnson
2003-05-09  3:52         ` Chris Friesen
2003-05-09  4:13           ` Roland Dreier
2003-05-09  6:07             ` Chris Friesen
2003-05-09  4:26           ` William Lee Irwin III
2003-05-09  6:14             ` Chris Friesen
2003-05-09  6:20               ` William Lee Irwin III
2003-05-09  6:53                 ` Chris Friesen
2003-05-09  7:01                   ` William Lee Irwin III
2003-05-09 16:47                     ` Robert Love
2003-05-09 16:53                       ` William Lee Irwin III
2003-05-09 17:38                         ` Chris Friesen
2003-05-09 11:37               ` paubert
2003-05-09  8:23             ` mikpe
2003-05-09  8:17 ` how to measure scheduler latency on powerpc? realfeel doesn Giuliano Pochini
2003-05-10  0:39 how to measure scheduler latency on powerpc? realfeel doesn't work due to /dev/rtc issues Perez-Gonzalez, Inaky
2003-05-12 23:55 ` William Lee Irwin III
     [not found] <493798056@toto.iv>
2003-05-12  5:04 ` how to measure scheduler latency on powerpc? realfeel doesn't " Peter Chubb
2003-05-12  5:08   ` William Lee Irwin III
2003-05-13  0:20 how to measure scheduler latency on powerpc? realfeel doesn't " Perez-Gonzalez, Inaky
2003-05-13  1:02 ` William Lee Irwin III
2003-05-13  2:08 Perez-Gonzalez, Inaky
