From: "Chris Friesen" <cfriesen@nortel.com>
To: David Miller <davem@davemloft.net>
Cc: linuxppc-dev@ozlabs.org, kevdig@hypersurf.com
Subject: Re: [PATCH] genirq: Set initial default irq affinity to just CPU0
Date: Mon, 27 Oct 2008 13:10:55 -0600	[thread overview]
Message-ID: <4906123F.7020802@nortel.com> (raw)
In-Reply-To: <20081027.112823.178324048.davem@davemloft.net>

David Miller wrote:
> From: "Chris Friesen" <cfriesen@nortel.com>

>> What about something like the Cavium Octeon, where we have 16 cores but a
>> single core isn't powerful enough to keep up with a gigE device?
> 
> Hello, we either have hardware that does flow separation and has multiple
> RX queues going to multiple MSI-X interrupts, or we do flow separation in
> software (work-in-progress patches were posted for that about a month ago;
> maybe something final will land in 2.6.29).

Are there any plans for a mechanism that lets the kernel figure out (or be 
told) which packets CPU-affined tasks are interested in, and route the 
interrupts appropriately?
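
(To make the flow-separation idea concrete, here's a minimal sketch assuming
a trivial hash over the IPv4/port 4-tuple; the struct and function names are
made up for illustration and aren't taken from the work-in-progress patches:)

#include <stdint.h>

struct flow4 {
	uint32_t saddr, daddr;		/* IPv4 source/destination */
	uint16_t sport, dport;		/* TCP/UDP ports */
};

/*
 * Map a flow to one CPU so that every packet of that flow is processed
 * on the same core, preserving in-flow ordering while still spreading
 * distinct flows across cores.
 */
static unsigned int flow_to_cpu(const struct flow4 *f, unsigned int ncpus)
{
	uint32_t h;

	h = f->saddr ^ f->daddr;
	h ^= ((uint32_t)f->sport << 16) | f->dport;
	h *= 0x9e3779b1u;		/* multiplicative mix */
	return (h >> 16) % ncpus;
}

The point is that the dispatch decision is made per-flow rather than
per-packet, so reordering within a flow can't happen.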

> Just moving the interrupt around when not doing flow separation is as 
> suboptimal as you can possibly get.  You'll get out-of-order packet 
> processing within the same flow, TCP will retransmit when the reordering
> gets deep enough, and then you're totally screwed performance-wise.

In the ideal case I agree with you.  In this particular case, however, the 
hardware is capable of flow separation, but the vendor driver doesn't support 
it (and isn't in mainline).  Packet rates are high enough that a single core 
cannot keep up, but low enough that multiple cores can handle them without 
reordering, provided interrupt mitigation is not used.
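
(What we do in the meantime is open the interrupt up to several cores by
hand via /proc/irq/<n>/smp_affinity -- a sketch, with a made-up IRQ number
and CPU mask:)

#include <stdio.h>

int main(void)
{
	const int irq = 24;		/* hypothetical: the NIC's IRQ line */
	const unsigned long mask = 0xf;	/* hex bitmask: CPUs 0-3 */
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%lx\n", mask);
	fclose(f);
	return 0;
}

(Equivalent to "echo f > /proc/irq/24/smp_affinity" as root.  Whether the
interrupt then rotates among the cores in the mask or sticks to one of them
depends on the interrupt controller.)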

It's not an ideal situation, but we're sort of stuck unless we do custom 
driver work.

Chris

Thread overview: 23+ messages
2008-10-24 15:57 [PATCH] genirq: Set initial default irq affinity to just CPU0 Kumar Gala
2008-10-24 23:18 ` David Miller
2008-10-25 21:33   ` Benjamin Herrenschmidt
2008-10-25 22:53     ` Kevin Diggs
2008-10-26  4:05       ` David Miller
2008-10-27 17:36         ` Chris Friesen
2008-10-27 18:28           ` David Miller
2008-10-27 19:10             ` Chris Friesen [this message]
2008-10-27 19:25               ` David Miller
2008-10-28  3:46                 ` Chris Friesen
2008-10-27 19:43             ` Kumar Gala
2008-10-27 19:49               ` David Miller
2008-10-27 20:46                 ` Kumar Gala
2008-10-26  6:48       ` Benjamin Herrenschmidt
2008-10-26  7:16         ` David Miller
2008-10-26  8:29           ` Benjamin Herrenschmidt
2008-10-27  2:30         ` Kevin Diggs
2008-10-27  2:49           ` Benjamin Herrenschmidt
2008-10-26  4:04     ` David Miller
2008-10-26  6:33       ` Benjamin Herrenschmidt
2008-10-27 13:43         ` Kumar Gala
2008-10-27 20:27           ` Benjamin Herrenschmidt
2008-10-27 20:45             ` Kumar Gala
