* [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Arthur Kepner @ 2010-09-27 20:34 UTC
  To: linux-kernel; +Cc: Thomas Gleixner, x86


(Fixed a small error in yesterday's version, and added x86@kernel.org
to the cc list.)

SGI has encountered situations where particular CPUs run out of
interrupt vectors on systems with many (several hundred or more)
CPUs. This happens because some drivers (particularly the mlx4_core
driver) select the number of interrupts they allocate based on the
number of CPUs, and because of how the default irq affinity is used.

Do pseudo round-robin distribution of irqs to CPUs within a node
to avoid (or at least delay) running out of vectors on any particular
CPU.

Signed-off-by: Arthur Kepner <akepner@sgi.com>
---
 arch/x86/kernel/apic/io_apic.c |   28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index f1efeba..609a001 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -3254,6 +3254,8 @@ unsigned int create_irq_nr(unsigned int irq_want, int node)
 
 	raw_spin_lock_irqsave(&vector_lock, flags);
 	for (new = irq_want; new < nr_irqs; new++) {
+		cpumask_var_t tmp_mask;
+
 		desc_new = irq_to_desc_alloc_node(new, node);
 		if (!desc_new) {
 			printk(KERN_INFO "can not get irq_desc for %d\n", new);
@@ -3267,8 +3269,30 @@ unsigned int create_irq_nr(unsigned int irq_want, int node)
 		desc_new = move_irq_desc(desc_new, node);
 		cfg_new = desc_new->chip_data;
 
-		if (__assign_irq_vector(new, cfg_new, apic->target_cpus()) == 0)
-			irq = new;
+		if ((node != -1) && alloc_cpumask_var(&tmp_mask, GFP_ATOMIC)) {
+
+			static int cpu;
+
+			/* try to place irq on a cpu in the node in pseudo-
+			 * round-robin order */
+
+			cpu = __next_cpu_nr(cpu, cpumask_of_node(node));
+			if (cpu >= nr_cpu_ids)
+				cpu = cpumask_first(cpumask_of_node(node));
+
+			cpumask_set_cpu(cpu, tmp_mask);
+
+			if (cpumask_test_cpu(cpu, apic->target_cpus()) &&
+			    __assign_irq_vector(new, cfg_new, tmp_mask) == 0)
+				irq = new;
+
+			free_cpumask_var(tmp_mask);
+		}
+
+		if (irq == 0)
+			if (__assign_irq_vector(new, cfg_new,
+						apic->target_cpus()) == 0)
+				irq = new;
 		break;
 	}
 	raw_spin_unlock_irqrestore(&vector_lock, flags);

* Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Thomas Gleixner @ 2010-09-27 20:46 UTC
  To: Arthur Kepner; +Cc: linux-kernel, x86

On Mon, 27 Sep 2010, Arthur Kepner wrote:

> 
> (Fixed a small error in yesterday's version, and added x86@kernel.org
> to the cc list.)
> 
> SGI has encountered situations where particular CPUs run out of
> interrupt vectors on systems with many (several hundred or more)
> CPUs. This happens because some drivers (particularly the mlx4_core
> driver) select the number of interrupts they allocate based on the
> number of CPUs, and because of how the default irq affinity is used.
> 
> Do pseudo round-robin distribution of irqs to CPUs within a node
> to avoid (or at least delay) running out of vectors on any particular
> CPU.

Sigh. Why is this an x86-specific problem?

If we setup an irq on a node then we should set the affinity to the
target node in general. The round robin inside the node is really not
a problem unless you hit:

   nr_irqs_per_node * nr_cpus_per_node > max_vectors_per_cpu

If that's the case then we probably have some more severe problems.

Again, I agree that we should target the irq to the node, but the fine
grained details can be done in user space.

Thanks,

	tglx

* Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Arthur Kepner @ 2010-09-27 22:01 UTC
  To: Thomas Gleixner; +Cc: linux-kernel, x86

On Mon, Sep 27, 2010 at 10:46:02PM +0200, Thomas Gleixner wrote:
> ...
> Sigh. Why is this an x86-specific problem?
>

It's obviously not. But we're particularly seeing it on x86 
systems, so an x86-specific fix would address our problem.
 
> If we setup an irq on a node then we should set the affinity to the
> target node in general. 

OK.

> .... The round robin inside the node is really not
> a problem unless you hit:
> 
>    nr_irqs_per_node * nr_cpus_per_node > max_vectors_per_cpu
> 

No, I don't think that's true. 

The problem we're seeing is that one driver asks for a large 
number of interrupts (on no CPU in particular). And because of the 
way that the vectors are initially assigned to CPUs (in 
__assign_irq_vector()), a particular CPU can have all its vectors 
consumed. 

Now, a second driver comes along and requests an interrupt on
a specific CPU, N. But CPU N is out of vectors, so that driver
fails.

This all happens before a user-space irq balancer is available.
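
To make the failure mode concrete, the vector allocation behaves
roughly like this (a much-simplified sketch of __assign_irq_vector();
the real code also skips the system vectors and handles vector
domains and pending migrations):

	/* sketch only, not the actual io_apic.c code */
	static int assign_vector_sketch(int irq, const struct cpumask *mask)
	{
		unsigned int cpu;
		int vector;

		for_each_cpu_and(cpu, mask, cpu_online_mask) {
			for (vector = FIRST_EXTERNAL_VECTOR;
			     vector < NR_VECTORS; vector++) {
				if (per_cpu(vector_irq, cpu)[vector] != -1)
					continue;	/* slot in use */
				per_cpu(vector_irq, cpu)[vector] = irq;
				return 0;
			}
			/* this cpu is full, try the next one in the mask */
		}
		return -ENOSPC;
	}

Every allocation restarts the search at the first cpu in the mask, so
the lowest-numbered cpus fill up first.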

-- 
Arthur

* Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Thomas Gleixner @ 2010-09-27 22:12 UTC
  To: Arthur Kepner; +Cc: linux-kernel, x86

On Mon, 27 Sep 2010, Arthur Kepner wrote:

> On Mon, Sep 27, 2010 at 10:46:02PM +0200, Thomas Gleixner wrote:
> > ...
> > Sigh. Why is this an x86-specific problem?
> >
> 
> It's obviously not. But we're particularly seeing it on x86 
> systems, so an x86-specific fix would address our problem.

Even more sigh.
  
> > If we setup an irq on a node then we should set the affinity to the
> > target node in general. 
> 
> OK.
> 
> > .... The round robin inside the node is really not
> > a problem unless you hit:
> > 
> >    nr_irqs_per_node * nr_cpus_per_node > max_vectors_per_cpu
> > 
> 
> No, I don't think that's true. 
> 
> The problem we're seeing is that one driver asks for a large 
> number of interrupts (on no CPU in particular). And because of the 

It does it for a node, dammit. Otherwise your patch would be
absolutely useless.

> > > +               if ((node != -1) && alloc_cpumask_var(&tmp_mask, GFP_ATOMIC)) {

> way that the vectors are initially assigned to CPUs (in 
> __assign_irq_vector()), a particular CPU can have all its vectors 
> consumed. 

Stop selling me crap already.

Thanks,

	tglx

* Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Eric W. Biederman @ 2010-09-28  0:17 UTC
  To: Thomas Gleixner; +Cc: Arthur Kepner, linux-kernel, x86

Thomas Gleixner <tglx@linutronix.de> writes:

> On Mon, 27 Sep 2010, Arthur Kepner wrote:
>
>> On Mon, Sep 27, 2010 at 10:46:02PM +0200, Thomas Gleixner wrote:
>> > ...
>> > Sigh. Why is this an x86-specific problem?
>> >
>> 
>> It's obviously not. But we're particularly seeing it on x86 
>> systems, so an x86-specific fix would address our problem.
>
> Even more sigh.

The fact that x86 has vectors probably doesn't help.

>> > If we setup an irq on a node then we should set the affinity to the
>> > target node in general. 
>> 
>> OK.
>> 
>> > .... The round robin inside the node is really not
>> > a problem unless you hit:
>> > 
>> >    nr_irqs_per_node * nr_cpus_per_node > max_vectors_per_cpu
>> > 
>> 
>> No, I don't think that's true. 
>> 
>> The problem we're seeing is that one driver asks for a large 
>> number of interrupts (on no CPU in particular). And because of the 
>
> It does it for a node, dammit. Otherwise your patch would be
> absolutely useless.

We derive a node from where the device is plugged in.  The driver
does not specify a node.

>> > > +               if ((node != -1) && alloc_cpumask_var(&tmp_mask, GFP_ATOMIC)) {
>
>> way that the vectors are initially assigned to CPUs (in 
>> __assign_irq_vector()), a particular CPU can have all its vectors 
>> consumed. 
>
> Stop selling me crap already.

The deep bug is that create_irq_nr allocates a vector (which it does
because at the time there was no better way to mark an irq in use on
x86).  In the case of msi-x we really don't know the node that irq is
going to be used on until we get a request irq.  We simply know which
node the device is on.

If you want to see what is going on, the call trace looks like:
pci_enable_msix 
  arch_setup_msi_irqs
    create_irq_nr

After pci_enable_msix() is finished, the driver goes and makes all
of the irqs per-cpu irqs.
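
In other words, the usual driver pattern is roughly the following
(schematic, no particular driver; FOO_NVEC and the foo_* names are
invented):

	static int foo_probe(struct pci_dev *pdev,
			     const struct pci_device_id *id)
	{
		struct msix_entry entries[FOO_NVEC];
		int i, err;

		for (i = 0; i < FOO_NVEC; i++)
			entries[i].entry = i;

		/* a vector is consumed for every irq right here, long
		 * before anyone knows which cpu will service them */
		err = pci_enable_msix(pdev, entries, FOO_NVEC);
		if (err)
			return err;

		for (i = 0; i < FOO_NVEC; i++)
			request_irq(entries[i].vector, foo_handler,
				    0, "foo", pdev);
		return 0;
	}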

There are goofy things that happen when hardware asks for 1 irq per cpu.
But since msi can ask for up to 4096 irqs (assuming the hardware
supports it) I can totally see putting all 256 of those irqs on a single
cpu, before you go to user space and let user space or something
reassign all of those irqs in a per cpu way.

My gut feel says that the real answer is to delay assigning a vector
to an irq until request_irq().  At which point we will know that someone
at least wants to use the irq.

Eric


* Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Thomas Gleixner @ 2010-09-28  8:08 UTC
  To: Eric W. Biederman; +Cc: Arthur Kepner, linux-kernel, x86

On Mon, 27 Sep 2010, Eric W. Biederman wrote:
> > On Mon, 27 Sep 2010, Arthur Kepner wrote:
> The deep bug is that create_irq_nr allocates a vector (which it does
> because at the time there was no better way to mark an irq in use on
> x86).  In the case of msi-x we really don't know the node that irq is
> going to be used on until we get a request irq.  We simply know which
> node the device is on.

Bah. So the whole per node allocation business is completely useless
at this point.

> If you want to see what is going on, the call trace looks like:
> pci_enable_msix 
>   arch_setup_msi_irqs
>     create_irq_nr
> 
> After pci_enable_msix() is finished, the driver goes and makes all
> of the irqs per-cpu irqs.
> 
> There are goofy things that happen when hardware asks for 1 irq per cpu.
> But since msi can ask for up to 4096 irqs (assuming the hardware
> supports it) I can totally see putting all 256 of those irqs on a single
> cpu, before you go to user space and let user space or something
> reassign all of those irqs in a per cpu way.
> 
> My gut feel says that the real answer is to delay assigning a vector
> to an irq until request_irq().  At which point we will know that someone
> at least wants to use the irq.

Right. So the solution would be:

create_irq allocates an irq number + irq descriptor, nothing else

chip->startup() will setup the vector and chip->shutdown releases
it. That requires changing the return value of chip->startup to int,
so we can return an error code, but that can be done in the course of
the overhaul I'm working on.
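
Roughly, on the startup side (a sketch only; ioapic_irq_startup is an
invented name, and the routing/unmask details are omitted):

	static int ioapic_irq_startup(unsigned int irq)
	{
		struct irq_cfg *cfg = get_irq_chip_data(irq);
		int err;

		/* the vector is allocated here, i.e. when request_irq()
		 * starts the irq, not in create_irq_nr() */
		err = assign_irq_vector(irq, cfg, apic->target_cpus());
		if (err)
			return err;

		/* ... program the routing entry and unmask as before ... */
		return 0;
	}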

Right now I prefer not to add more crap to io_apic.c, it's horrible
enough already. I'll fix that with the cleanup.

Thanks

	tglx



* Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Eric W. Biederman @ 2010-09-28 10:59 UTC
  To: Thomas Gleixner; +Cc: Arthur Kepner, linux-kernel, x86

Thomas Gleixner <tglx@linutronix.de> writes:

> On Mon, 27 Sep 2010, Eric W. Biederman wrote:
>> > On Mon, 27 Sep 2010, Arthur Kepner wrote:
>> The deep bug is that create_irq_nr allocates a vector (which it does
>> because at the time there was no better way to mark an irq in use on
>> x86).  In the case of msi-x we really don't know the node that irq is
>> going to be used on until we get a request irq.  We simply know which
>> node the device is on.
>
> Bah. So the whole per node allocation business is completely useless
> at this point.

Probably.

>> If you want to see what is going on, the call trace looks like:
>> pci_enable_msix 
>>   arch_setup_msi_irqs
>>     create_irq_nr
>> 
>> After pci_enable_msix() is finished, the driver goes and makes all
>> of the irqs per-cpu irqs.
>> 
>> There are goofy things that happen when hardware asks for 1 irq per cpu.
>> But since msi can ask for up to 4096 irqs (assuming the hardware
>> supports it) I can totally see putting all 256 of those irqs on a single
>> cpu, before you go to user space and let user space or something
>> reassign all of those irqs in a per cpu way.
>> 
>> My gut feel says that the real answer is to delay assigning a vector
>> to an irq until request_irq().  At which point we will know that someone
>> at least wants to use the irq.
>
> Right. So the solution would be:
>
> create_irq allocates an irq number + irq descriptor, nothing else
>
> chip->startup() will setup the vector and chip->shutdown releases
> it. That requires changing the return value of chip->startup to int,
> so we can return an error code, but that can be done in the course of
> the overhaul I'm working on.
>
> Right now I prefer not to add more crap to io_apic.c, it's horrible
> enough already. I'll fix that with the cleanup.

Understood.  It has taken a couple of years before this bug finally
bit anyone; waiting a release or two to get it fixed properly seems
reasonable.

pci_enable_msix is awkward in its own way, but it has few enough
callers (< 80) that it is also fixable.

drivers/pci/msi.c and drivers/pci/htirq.c are interesting in
that they are arch-independent users of the generic layer, which
is why msi_desc needed a new field.

Eric



* Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Arthur Kepner @ 2010-09-29 17:19 UTC
  To: Eric W. Biederman; +Cc: Thomas Gleixner, linux-kernel, x86


(Compendium reply to 2 emails.)

On Mon, Sep 27, 2010 at 05:17:07PM -0700, Eric W. Biederman wrote:
> Thomas Gleixner <tglx@linutronix.de> writes:
> 
> > On Mon, 27 Sep 2010, Arthur Kepner wrote:
> >
> ......
> The deep bug is that create_irq_nr allocates a vector (which it does
> because at the time there was no better way to mark an irq in use on
> x86).  In the case of msi-x we really don't know the node that irq is
> going to be used on until we get a request irq.  We simply know which
> node the device is on.
> 
> If you want to see what is going on, the call trace looks like:
> pci_enable_msix 
>   arch_setup_msi_irqs
>     create_irq_nr
> 
> After pci_enable_msix() is finished, the driver goes and makes all
> of the irqs per-cpu irqs.
> 
> There are goofy things that happen when hardware asks for 1 irq per cpu.
> But since msi can ask for up to 4096 irqs (assuming the hardware
> supports it) I can totally see putting all 256 of those irqs on a single
> cpu, before you go to user space and let user space or something
> reassign all of those irqs in a per cpu way.
> 

Yes, that's exactly the problem. All of the vectors on the lowest 
numbered CPUs get used. Any subsequent request for an interrupt on 
one of the low numbered CPUs will fail.

> .....

On Tue, Sep 28, 2010 at 03:59:33AM -0700, Eric W. Biederman wrote:
> Thomas Gleixner <tglx@linutronix.de> writes:
> 
> > On Mon, 27 Sep 2010, Eric W. Biederman wrote:
> >> > On Mon, 27 Sep 2010, Arthur Kepner wrote:
> >> The deep bug is that create_irq_nr allocates a vector (which it does
> >> because at the time there was no better way to mark an irq in use on
> >> x86).  In the case of msi-x we really don't know the node that irq is
> >> going to be used on until we get a request irq.  We simply know which
> >> node the device is on.
> >
> > Bah. So the whole per node allocation business is completely useless
> > at this point.
> 
> Probably.

Huh? No, the patch that started this thread spreads the irqs around 
and avoids the problem of a single CPU's vectors all being consumed.

> ...
> 
> Understood.  It has taken a couple of years before this bug finally
> bit anyone; waiting a release or two to get it fixed properly seems
> reasonable.
> ....

And so what are we to do in the meantime? At the moment we're 
disabling MSIX, which is a pretty unattractive workaround.

-- 
Arthur

* Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Thomas Gleixner @ 2010-09-29 18:05 UTC
  To: Arthur Kepner; +Cc: Eric W. Biederman, linux-kernel, x86

On Wed, 29 Sep 2010, Arthur Kepner wrote:
> On Mon, Sep 27, 2010 at 05:17:07PM -0700, Eric W. Biederman wrote:
> > Thomas Gleixner <tglx@linutronix.de> writes:
> 
> On Tue, Sep 28, 2010 at 03:59:33AM -0700, Eric W. Biederman wrote:
> > Thomas Gleixner <tglx@linutronix.de> writes:
> > 
> > > On Mon, 27 Sep 2010, Eric W. Biederman wrote:
> > >> > On Mon, 27 Sep 2010, Arthur Kepner wrote:
> > >> The deep bug is that create_irq_nr allocates a vector (which it does
> > >> because at the time there was no better way to mark an irq in use on
> > >> x86).  In the case of msi-x we really don't know the node that irq is
> > >> going to be used on until we get a request irq.  We simply know which
> > >> node the device is on.
> > >
> > > Bah. So the whole per node allocation business is completely useless
> > > at this point.
> > 
> > Probably.
> 
> Huh? No, the patch that started this thread spreads the irqs around 
> and avoids the problem of a single CPU's vectors all being consumed.

I'm talking about the per-node memory allocations for sparse irq,
not the irq vectors.
 
> > 
> > Understood.  It has taken a couple of years before this bug finally
> > bit anyone; waiting a release or two to get it fixed properly seems
> > reasonable.
> > ....
> 
> And so what are we to do in the meantime? At the moment we're 
> disabling MSIX, which is a pretty unattractive workaround.

I understand that. The cleanup patches are hopefully ready for the
next kernel, so we can deal with that problem on top of them.

Thanks,

	tglx

* Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Thomas Gleixner @ 2010-10-17 10:44 UTC
  To: Eric W. Biederman; +Cc: Arthur Kepner, LKML, x86, Jesse Barnes

Eric,

On Tue, 28 Sep 2010, Eric W. Biederman wrote:
> Thomas Gleixner <tglx@linutronix.de> writes:
> >> My gut feel says that the real answer is to delay assigning a vector
> >> to an irq until request_irq().  At which point we will know that someone
> >> at least wants to use the irq.

Looked a bit deeper into the users. Quite a bunch do

       pci_enable_msix();
       request_irqs();

in their probe function. There is no sign of making them per cpu or
node. mlx4 is one of them. No sign of anything related to nodes or cpus in
the whole driver.

Even if the driver does not request the irqs from the probe function,
why does it need to do the msi/msix setup in the probe function at
all?

Wouldn't it be sufficient to do that at open(), right before the
interrupts are requested?
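
Something like this, schematically (the foo_* names are invented, and
the partial-success return of pci_enable_msix() is ignored):

	static int foo_open(struct net_device *netdev)
	{
		struct foo_priv *priv = netdev_priv(netdev);
		int i, err;

		for (i = 0; i < priv->nvec; i++)
			priv->entries[i].entry = i;

		/* msi-x setup moved from probe to open, so vectors are
		 * only consumed while the device is actually in use */
		err = pci_enable_msix(priv->pdev, priv->entries, priv->nvec);
		if (err)
			return err;

		return request_irq(priv->entries[0].vector, foo_intr,
				   0, netdev->name, netdev);
	}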

> > Right. So the solution would be:
> >
> > create_irq allocates an irq number + irq descriptor, nothing else
> >
> > chip->startup() will setup the vector and chip->shutdown releases
> > it. That requires changing the return value of chip->startup to int,
> > so we can return an error code, but that can be done in the course of
> > the overhaul I'm working on.
> >
> > Right now I prefer not to add more crap to io_apic.c, it's horrible
> > enough already. I'll fix that with the cleanup.
> 
> Understood.  It has taken a couple of years before this bug finally
> bit anyone waiting a release or two to get it fixed properly seems
> reasonable.

That needs some real work on drivers :)

> pci_enable_msix all in it's own way is fixable, but it has
> few enough callers < 80 that it is also fixable.

The fundamental flaw of arch_setup_msi_irqs() is 

  node = dev_to_node(&dev->dev);

That's the only node information we get. So we put everything on a
single node.

What we really need is a node entry in struct msi_desc, which gets
assigned from struct msix_entry in msix_setup_entries().
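
Roughly (a sketch; whether the node comes from a new field in struct
msix_entry or defaults to dev_to_node() is exactly the open
question):

	struct msi_desc {
		/* ... existing fields ... */

		/* numa node this irq is expected to be serviced on */
		int node;
	};

and in msix_setup_entries(), something like:

	desc->node = entries[i].node;	/* hypothetical new field */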

Thoughts ?

	 tglx

* Re: [RFC/PATCHv2] x86/irq: round-robin distribution of irqs to cpus w/in node
From: Arthur Kepner @ 2010-10-19 23:58 UTC
  To: Thomas Gleixner; +Cc: Eric W. Biederman, LKML, x86, Jesse Barnes

On Sun, Oct 17, 2010 at 12:44:07PM +0200, Thomas Gleixner wrote:
> ....
> Looked a bit deeper into the users. Quite a bunch do
> 
>        pci_enable_msix();
>        request_irqs();
> 
> in their probe function. There is no sign of making them per cpu or
> node. mlx4 is one of them. No sign of anything related to nodes or cpus in
> the whole driver.
> 
> Even if the driver does not request the irqs from the probe function,
> why does it need to do the msi/msix setup in the probe function at
> all?
> 
> Wouldn't it be sufficient to do that at open(), right before the
> interrupts are requested?

That's a good point.

> .....
> The fundamental flaw of arch_setup_msi_irqs() is 
> 
>   node = dev_to_node(&dev->dev);
> 
> That's the only node information we get. So we put everything on a
> single node.
>

But it's even worse than that, because we put everything on a single 
cpu within the node until no vectors are left there, then move on to the 
next one...

-- 
Arthur
