From: Andi Kleen <ak@muc.de>
To: Adrian Bunk <bunk@fs.tum.de>
Cc: Andi Kleen <ak@muc.de>,
torvalds@transmeta.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] Support worst case cache line sizes as config option
Date: Tue, 29 Apr 2003 00:00:00 +0200
Message-ID: <20030428220000.GA9152@averell>
In-Reply-To: <20030428121920.GE27064@fs.tum.de>
On Mon, Apr 28, 2003 at 02:19:20PM +0200, Adrian Bunk wrote:
> Your approach, as well as the approach I'm currently working on,
> breaks the current semantics that a plain M386 produces a kernel
> that runs on all CPUs.
You seem to be completely confused about what the patch is doing.
CONFIG_X86_GENERIC does not break anything.
A kernel compiled with it will still run fine on the 386 (or whatever
the main CPU selection was). All it does is make the worst case, the
kernel running on a CPU with a larger cache line size than your "main"
CPU, less painful.
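To illustrate what that means in practice (this is only a rough sketch
of the idea, not the literal patch; the CONFIG_ names stand for the
usual CPU selections), the option just changes the compile-time
constant that everything pads against:

#ifdef CONFIG_X86_GENERIC
# define L1_CACHE_SHIFT 7               /* 128 bytes, worst case (P4) */
#elif defined(CONFIG_MPENTIUM4)
# define L1_CACHE_SHIFT 7
#elif defined(CONFIG_MK7)
# define L1_CACHE_SHIFT 6               /* 64 bytes (Athlon) */
#else
# define L1_CACHE_SHIFT 5               /* 32 bytes (386 .. PIII) */
#endif
#define L1_CACHE_BYTES  (1 << L1_CACHE_SHIFT)

The generated code still runs on the smaller CPU; only the padding
constant grows.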
In fact, if you read my patchkits from the last few days, they are all
aimed at making kernels run on more CPUs, not fewer. The eventual goal
is to make Athlon kernels run on all 686+ class CPUs, P4 kernels run
on all 686+ CPUs, and 386 kernels run on all CPUs without serious
performance penalties.
M386 is a rather bad example here anyway. The standard situation is
that people compile an SMP kernel for the P2 (which seems to be "the
generic CPU" these days[1]). That kernel is compiled with a cache line
size of 32 bytes. This 32-byte line size is used to avoid false
sharing in a lot of data structures; e.g. arrays of per-CPU data are
usually padded to the cache line size to make sure each CPU gets its
own cache line in the array. This unfortunately does not work when you
run the kernel on a CPU with a bigger cache line, like an Athlon with
a 64-byte line or a P4 with a 128-byte line. In that case a lot of
performance is lost on SMP because multiple CPUs fight over the data
in a single cache line ("false sharing"). Always padding to the worst
case cache line size avoids this problem.
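To make the false sharing concrete, the usual pattern looks roughly
like this (a sketch only; the struct and field names are made up):

#include <linux/cache.h>        /* ____cacheline_aligned, L1_CACHE_BYTES */
#include <linux/threads.h>      /* NR_CPUS */

/*
 * Each CPU gets its own cache-line-sized slot, so counter updates
 * from different CPUs never touch the same cache line.
 */
struct example_percpu_stats {
        unsigned long packets;
        unsigned long bytes;
} ____cacheline_aligned;        /* pads each element to L1_CACHE_BYTES */

static struct example_percpu_stats stats[NR_CPUS];

If L1_CACHE_BYTES is compiled as 32 but the real line size is 64 or
128, two or four CPUs' entries end up in one line and every counter
update bounces that line between the CPUs.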
The issue is not an SMP-only problem. Some device drivers already use
the cache line size to optimize PCI bus performance, and they suffer a
penalty when the data is padded for the wrong line size.
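As an example of the driver case (again only a sketch, not taken from
any particular driver; the function name is made up):

#include <linux/pci.h>
#include <linux/cache.h>        /* L1_CACHE_BYTES */

/*
 * Tell the device the host cache line size (the register counts
 * 32-bit words) so it can use cache-line-aligned burst transactions.
 */
static void example_set_pci_cacheline(struct pci_dev *pdev)
{
        pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, L1_CACHE_BYTES / 4);
}

If the compiled-in line size is too small, the device ends up using
less efficient bus transactions than the hardware could handle.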
Increasing the cache line size costs a bit of memory for more padding,
but overall the overhead is quite reasonable.
As for Jamie's point: if you don't want your 386 kernel to be
optimized for the worst case, just don't enable the X86_GENERIC
option.
-Andi
[1] ignoring K6 and C3, which are too poor to have CMOV.