From: Jeff Garzik <jgarzik@pobox.com>
To: "Kamble, Nitin A" <nitin.a.kamble@intel.com>
Cc: Andrew Morton <akpm@digeo.com>,
linux-kernel@vger.kernel.org, kai.bankett@ontika.net,
mingo@redhat.com, "Nakajima, Jun" <jun.nakajima@intel.com>,
"Mallick, Asit K" <asit.k.mallick@intel.com>,
"Saxena, Sunil" <sunil.saxena@intel.com>
Subject: Re: [PATCH][IO_APIC] 2.5.63bk7 irq_balance improvements / bug-fixes
Date: Tue, 04 Mar 2003 23:38:11 -0500 [thread overview]
Message-ID: <3E657F33.4000304@pobox.com> (raw)
In-Reply-To: <E88224AA79D2744187E7854CA8D9131DA8B7E0@fmsmsx407.fm.intel.com>
Kamble, Nitin A wrote:
> There are a few issues we found with the user-level daemon approach.
Thanks much for the response!
> Static binding compatibility: With the user-level daemon, users cannot
> use the /proc/irq/i/smp_affinity interface for the static binding of
> interrupts.
Not terribly accurate: in "one-shot" mode, where the daemon balances
irqs once at startup, users can change smp_affinity all they want.
In the normal continuous-balance mode, it is quite easy to have the
daemon either (a) notice changes users make or (b) be configured to
respect static bindings. The daemon does not do either today, but it
is a simple change.
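In one-shot mode the static-binding interface keeps working exactly as
before. A minimal sketch of such a binding (the IRQ and CPU numbers here
are hypothetical; the real IRQ numbers come from /proc/interrupts):

```shell
# Sketch of a static IRQ binding, assuming a hypothetical IRQ 19 and CPU 2.
IRQ=19
CPU=2

# /proc/irq/N/smp_affinity takes a hex bitmask of allowed CPUs;
# CPU n corresponds to bit n, so CPU 2 is mask 0x4.
MASK=$(printf '%x' $((1 << CPU)))
echo "mask ${MASK} -> /proc/irq/${IRQ}/smp_affinity"

# On a real system this would be done as root:
#   echo ${MASK} > /proc/irq/${IRQ}/smp_affinity
```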
> There is some information which is only available in the kernel today,
> and future implementations might need more kernel data. This is
> important for interfaces such as NAPI, where interrupt handling changes
> on the fly.
That depends on the information :)  Some information useful for
balancing is only [easily] available from userspace, and in-kernel
information may be easily exported through "sysfs", which is designed
for exactly that purpose.
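As a rough illustration (the attribute path below is an assumption;
sysfs layouts vary by kernel version), a userspace balancer picks up
in-kernel state simply by reading files:

```shell
# sysfs exposes one value per file, so userspace needs no special
# interface to read in-kernel state -- just open() and read().
# The path is a hypothetical example; substitute the attributes you need.
for f in /sys/class/net/*/statistics/rx_packets; do
    [ -r "$f" ] || continue     # skip if the glob matched nothing
    echo "$f: $(cat "$f")"
done
```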
Further, for NAPI and networking in general, it is recommended to bind
each NIC to a single interrupt, and never change that binding.
Delivering a single NIC's interrupts to multiple CPUs leads to a
noticeable performance loss. This is why some people complain that
their specific network setups are faster on a uniprocessor kernel than
on an SMP kernel.
I have not examined interrupt delivery for other peripherals, such as
ATA or SCSI hosts, but for networking you definitely want to statically
bind each NIC's irqs to a separate CPU, and then not touch that binding.
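A sketch of that static binding, assuming hypothetical IRQ numbers
24-27 for four NICs, spread round-robin across CPUs 0-3:

```shell
# Pin each NIC's IRQ to its own CPU, then leave the binding alone.
# IRQ numbers are hypothetical; read the real ones from /proc/interrupts.
CPU=0
for IRQ in 24 25 26 27; do
    MASK=$(printf '%x' $((1 << CPU)))   # one bit per CPU, hex
    echo "irq ${IRQ} -> cpu ${CPU} (mask ${MASK})"
    # As root on a real system:
    #   echo ${MASK} > /proc/irq/${IRQ}/smp_affinity
    CPU=$((CPU + 1))
done
```

Once set this way, each NIC's interrupts always land on the same CPU,
which preserves cache locality in the receive path.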
Best regards, and thanks again for your valuable feedback,
Jeff
Thread overview: 11+ messages
2003-03-05 4:21 [PATCH][IO_APIC] 2.5.63bk7 irq_balance improvements / bug-fixes Kamble, Nitin A
2003-03-05 4:38 ` Jeff Garzik [this message]
2003-03-05 15:46 ` Jason Lunz
2003-03-05 18:26 ` Arjan van de Ven
-- strict thread matches above, loose matches on Subject: below --
2003-03-06 20:01 Nakajima, Jun
2003-03-05 19:57 Kamble, Nitin A
2003-03-04 23:33 Kamble, Nitin A
2003-03-04 23:51 ` Andrew Morton
2003-03-05 10:48 ` Kai Bankett
2003-03-04 16:33 Kai Bankett
2003-03-04 16:45 ` Jeff Garzik