* IRQ Routing.
@ 2003-10-20  8:03 James Courtier-Dutton
  2003-10-20 14:23 ` Martin J. Bligh
  0 siblings, 1 reply; 6+ messages in thread
From: James Courtier-Dutton @ 2003-10-20  8:03 UTC (permalink / raw)
  To: linux-kernel

Hi,

Are there any Linux tools that let the user view IRQ routing details,
and maybe change the routing after boot?

This might be useful in special cases.

Cheers
James



* Re: IRQ Routing.
  2003-10-20  8:03 IRQ Routing James Courtier-Dutton
@ 2003-10-20 14:23 ` Martin J. Bligh
  2003-10-20 19:15   ` James Courtier-Dutton
  2003-10-21  5:59   ` IRQ Routing. [A 2.6 question about smp irq balancing] Paul
  0 siblings, 2 replies; 6+ messages in thread
From: Martin J. Bligh @ 2003-10-20 14:23 UTC (permalink / raw)
  To: James Courtier-Dutton, linux-kernel

> Are there any Linux tools that let the user view IRQ routing details,
> and maybe change the routing after boot?
> 
> This might be useful in special cases.

Yeah, "cat /proc/interrupts" and 
"echo <cpu_bitmask> > /proc/irq/<number>/smp_affinity"

M.



* Re: IRQ Routing.
  2003-10-20 14:23 ` Martin J. Bligh
@ 2003-10-20 19:15   ` James Courtier-Dutton
  2003-10-21  5:59   ` IRQ Routing. [A 2.6 question about smp irq balancing] Paul
  1 sibling, 0 replies; 6+ messages in thread
From: James Courtier-Dutton @ 2003-10-20 19:15 UTC (permalink / raw)
  To: Martin J. Bligh; +Cc: linux-kernel

Martin J. Bligh wrote:
>>Are there any Linux tools that let the user view IRQ routing details,
>>and maybe change the routing after boot?
>>
>>This might be useful in special cases.
> 
> 
> Yeah, "cat /proc/interrupts" and 
> "echo <cpu_bitmask> > /proc/irq/<number>/smp_affinity"
> 
> M.
> 
> 
> 
I really need more info than that.
I want info like:
PCI card in slot X is using [LNKA], which is being routed via the IO-APIC to
IRQ Y using an edge-triggered interrupt.

And then I need a tool to change those settings, for example to:
PCI card in slot X is using [LNKF], which is being routed via the simple PIC
to IRQ Y using a level-triggered, active-low interrupt.

"lspci -vvv" gives me some of the info, and on a working system
/proc/interrupts combined with lspci -vvv gives me all I need, but I
need to be able to tinker with the IRQ routing after boot, to test
what the IRQ settings should be, even if the kernel set them up wrong.
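What I do today is roughly this (the grep is just a convenience):

  # interrupt pin and routed IRQ for each PCI device
  lspci -vvv | grep -i interrupt
  # per-IRQ counters plus controller (IO-APIC/XT-PIC) and edge/level trigger
  cat /proc/interrupts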

Cheers
James



* Re: IRQ Routing. [A 2.6 question about smp irq balancing]
  2003-10-20 14:23 ` Martin J. Bligh
  2003-10-20 19:15   ` James Courtier-Dutton
@ 2003-10-21  5:59   ` Paul
  2003-10-21 14:34     ` Martin J. Bligh
  1 sibling, 1 reply; 6+ messages in thread
From: Paul @ 2003-10-21  5:59 UTC (permalink / raw)
  To: Martin J. Bligh; +Cc: linux-kernel

"Martin J. Bligh" <mbligh@aracnet.com>, on Mon Oct 20, 2003 [07:23:44 AM] said:
> > Are there any Linux tools that let the user view IRQ routing details,
> > and maybe change the routing after boot?
> > 
> > This might be useful in special cases.
> 
> Yeah, "cat /proc/interrupts" and 
> "echo <cpu_bitmask> > /proc/irq/<number>/smp_affinity"
> 
> M.

	Hi;

	Here is my /proc/interrupts for 2.6.0-test6:

           CPU0       CPU1       
  0:      10442 1795758405    IO-APIC-edge  timer
  1:     737209          0    IO-APIC-edge  i8042
  2:          0          0          XT-PIC  cascade
  3:        517          1    IO-APIC-edge  serial
  4:  100488033          1    IO-APIC-edge  serial
  5:          0          0    IO-APIC-edge  eth0
  8:          2          0    IO-APIC-edge  rtc
 12:    1626560          1    IO-APIC-edge  i8042
 14:    8988748          8    IO-APIC-edge  ide0
 15:     377275          1    IO-APIC-edge  ide1
 16:    1364704          2   IO-APIC-level  eth1
 17:     857789     144630   IO-APIC-level  eth2
 19:     165579          1   IO-APIC-level  Ensoniq AudioPCI
NMI: 1795768424 1795763795 
LOC: 1795877980 1795878044 
ERR:          0
MIS:        351

	Now, on 2.4 kernels both columns are about even.
Upon noticing this, I tried to get more interrupts onto cpu1.
You can see I got some on eth2 by running many 'ping -f's at the
same time. I haven't been able to get anything more out of ide[01]
for cpu1, even though I've run many -j4 compiles of huge packages,
which coincided with other things that bring my load up to 8+ over
the course of 20 days of uptime. /proc/irq/<num>/smp_affinity
is 3 (i.e. both CPUs) for all <num>. I was able to induce interrupts
on cpu1 for ide0 by setting it to 2.
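For reference, what I did was roughly this (14 is ide0 on this box):

  # restrict IRQ 14 (ide0) to cpu1; masks: 1 = cpu0, 2 = cpu1, 3 = both
  echo 2 > /proc/irq/14/smp_affinity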
	OK, my question is: is this how it is supposed to be?
I ask because I am seeing a throughput decrease on 2.6 vs. 2.4
in hdparm -t, bonnie, and kernel compile benchmarks, though interactivity
seems to be better.

Paul
set@pobox.com


* Re: IRQ Routing. [A 2.6 question about smp irq balancing]
  2003-10-21  5:59   ` IRQ Routing. [A 2.6 question about smp irq balancing] Paul
@ 2003-10-21 14:34     ` Martin J. Bligh
  2003-10-22  1:21       ` Paul
  0 siblings, 1 reply; 6+ messages in thread
From: Martin J. Bligh @ 2003-10-21 14:34 UTC (permalink / raw)
  To: Paul; +Cc: linux-kernel

>> > Are there any Linux tools that let the user view IRQ routing details,
>> > and maybe change the routing after boot?
>> > 
>> > This might be useful in special cases.
>> 
>> Yeah, "cat /proc/interrupts" and 
>> "echo <cpu_bitmask> > /proc/irq/<number>/smp_affinity"
>> 
>> M.
> 
> 	Hi;
> 
> 	Here is my /proc/interrupts for 2.6.0-test6:
> 
>            CPU0       CPU1       
>   0:      10442 1795758405    IO-APIC-edge  timer
>   1:     737209          0    IO-APIC-edge  i8042
>   2:          0          0          XT-PIC  cascade
>   3:        517          1    IO-APIC-edge  serial
>   4:  100488033          1    IO-APIC-edge  serial
>   5:          0          0    IO-APIC-edge  eth0
>   8:          2          0    IO-APIC-edge  rtc
>  12:    1626560          1    IO-APIC-edge  i8042
>  14:    8988748          8    IO-APIC-edge  ide0
>  15:     377275          1    IO-APIC-edge  ide1
>  16:    1364704          2   IO-APIC-level  eth1
>  17:     857789     144630   IO-APIC-level  eth2
>  19:     165579          1   IO-APIC-level  Ensoniq AudioPCI
> NMI: 1795768424 1795763795 
> LOC: 1795877980 1795878044 
> ERR:          0
> MIS:        351
> 
> 	Now, on 2.4 kernels both columns are about even.
> Upon noticing this, I tried to get more interrupts onto cpu1.
> You can see I got some on eth2 by running many 'ping -f's at the
> same time. I haven't been able to get anything more out of ide[01]
> for cpu1, even though I've run many -j4 compiles of huge packages,
> which coincided with other things that bring my load up to 8+ over
> the course of 20 days of uptime. /proc/irq/<num>/smp_affinity
> is 3 (i.e. both CPUs) for all <num>. I was able to induce interrupts
> on cpu1 for ide0 by setting it to 2.
> 	OK, my question is: is this how it is supposed to be?
> I ask because I am seeing a throughput decrease on 2.6 vs. 2.4
> in hdparm -t, bonnie, and kernel compile benchmarks, though interactivity
> seems to be better.

I doubt that's caused by interrupt load, but try this:

diff -urN linux-2.6.0-test4-clean/arch/i386/kernel/io_apic.c linux-2.6.0-test4-div10/arch/i386/kernel/io_apic.c
--- linux-2.6.0-test4-clean/arch/i386/kernel/io_apic.c	2003-09-15 06:46:02.000000000 -0700
+++ linux-2.6.0-test4-div10/arch/i386/kernel/io_apic.c	2003-09-15 06:47:17.000000000 -0700
@@ -393,7 +393,7 @@
        unsigned long max_cpu_irq = 0, min_cpu_irq = (~0);
        unsigned long move_this_load = 0;
        int max_loaded = 0, min_loaded = 0;
-       unsigned long useful_load_threshold = balanced_irq_interval + 10;
+       unsigned long useful_load_threshold = (balanced_irq_interval + 10) / 10;
        int selected_irq;
        int tmp_loaded, first_attempt = 1;
        unsigned long tmp_cpu_irq;





* Re: IRQ Routing. [A 2.6 question about smp irq balancing]
  2003-10-21 14:34     ` Martin J. Bligh
@ 2003-10-22  1:21       ` Paul
  0 siblings, 0 replies; 6+ messages in thread
From: Paul @ 2003-10-22  1:21 UTC (permalink / raw)
  To: Martin J. Bligh; +Cc: linux-kernel

"Martin J. Bligh" <mbligh@aracnet.com>, on Tue Oct 21, 2003 [07:34:01 AM] said:
[paul said]
> > 	Ok, my question is, is this how it is supposed to be?
> > I ask because I am seeing a throughput decrease on 2.6 v 2.4
> > on hdparm -t, bonnie, kernel compile benchmarks, though interactivity
> > seems to be better.
> 
> I doubt that's caused by interrupt load, but try this:
> 

	Hi;

	Thanks; this seems to generate more of an even distribution
(uptime: 3 hours). You are correct that I notice no performance
difference.

Paul
set@pobox.com

           CPU0       CPU1       
  0:    5011222    5035676    IO-APIC-edge  timer
  1:       5441          0    IO-APIC-edge  i8042
  2:          0          0          XT-PIC  cascade
  3:       8203          1    IO-APIC-edge  serial
  4:      58980     356739    IO-APIC-edge  serial
  5:          0          0    IO-APIC-edge  eth0
  8:          2          0    IO-APIC-edge  rtc
 12:       4968       3423    IO-APIC-edge  i8042
 14:     119080      62085    IO-APIC-edge  ide0
 15:        142          0    IO-APIC-edge  ide1
 16:        101          1   IO-APIC-level  eth1
 17:        110      10782   IO-APIC-level  eth2
 19:       1776          0   IO-APIC-level  Ensoniq AudioPCI
NMI:   10044606   10041826 
LOC:   10042436   10042460 
ERR:          0
MIS:          0

> diff -urN linux-2.6.0-test4-clean/arch/i386/kernel/io_apic.c linux-2.6.0-test4-div10/arch/i386/kernel/io_apic.c
> --- linux-2.6.0-test4-clean/arch/i386/kernel/io_apic.c	2003-09-15 06:46:02.000000000 -0700
> +++ linux-2.6.0-test4-div10/arch/i386/kernel/io_apic.c	2003-09-15 06:47:17.000000000 -0700
> @@ -393,7 +393,7 @@
>         unsigned long max_cpu_irq = 0, min_cpu_irq = (~0);
>         unsigned long move_this_load = 0;
>         int max_loaded = 0, min_loaded = 0;
> -       unsigned long useful_load_threshold = balanced_irq_interval + 10;
> +       unsigned long useful_load_threshold = (balanced_irq_interval + 10) / 10;
>         int selected_irq;
>         int tmp_loaded, first_attempt = 1;
>         unsigned long tmp_cpu_irq;


Thread overview: 6+ messages
2003-10-20  8:03 IRQ Routing James Courtier-Dutton
2003-10-20 14:23 ` Martin J. Bligh
2003-10-20 19:15   ` James Courtier-Dutton
2003-10-21  5:59   ` IRQ Routing. [A 2.6 question about smp irq balancing] Paul
2003-10-21 14:34     ` Martin J. Bligh
2003-10-22  1:21       ` Paul
