From: Andrew Theurer <habanero@us.ibm.com>
To: Nick Piggin <piggin@cyberone.com.au>
Cc: Bill Davidsen <davidsen@tmr.com>,
"Martin J. Bligh" <mbligh@aracnet.com>,
Erich Focht <efocht@hpce.nec.com>,
linux-kernel <linux-kernel@vger.kernel.org>,
LSE <lse-tech@lists.sourceforge.net>, Andi Kleen <ak@muc.de>,
torvalds@osdl.org, mingo@elte.hu
Subject: Re: [Lse-tech] Re: [patch] scheduler fix for 1cpu/node case
Date: Sat, 23 Aug 2003 09:32:24 -0500
Message-ID: <200308230932.24832.habanero@us.ibm.com>
In-Reply-To: <3F46B561.7060706@cyberone.com.au>
> > AMD is 1 because there's no need to balance within a node, so I want the
> > inter-node balance frequency to be as often as it was with just O(1).
> > This interval would not work well with other NUMA boxes, so that's the
> > main reason to have arch specific intervals.
>
> OK, I misread the patch. IIRC AMD has 1 CPU per node? If so, why doesn't
> this simply prevent balancing within a node?

Yes, one cpu per node.  It does prevent it, but with the current intervals
we end up not balancing as often as we should (since every useful balance
here is an inter-node balance), and when we call load_balance() from
schedule() on an idle cpu we don't balance at all, since that pass is
node-local only.
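
To spell that out, here is a minimal standalone sketch (plain user-space
C, not the actual scheduler code; NR_CPUS, cpu_to_node() and
find_busiest_in_node() are stand-ins for the real topology and balancing
code) of why a node-local idle balance finds nothing on a 1 cpu/node box:

/*
 * Standalone sketch, not kernel code: with one cpu per node, a
 * node-local balance pass skips every other cpu, so an idle cpu
 * never finds a busier runqueue to pull from.
 */
#include <stdio.h>

#define NR_CPUS 4

static int cpu_to_node(int cpu)
{
	return cpu;			/* 1 cpu per node, as on AMD */
}

static int find_busiest_in_node(int this_cpu)
{
	int cpu, busiest = -1;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (cpu == this_cpu)
			continue;
		if (cpu_to_node(cpu) != cpu_to_node(this_cpu))
			continue;	/* node-local pass: skip remote cpus */
		busiest = cpu;		/* real code would compare rq lengths */
	}
	return busiest;			/* always -1 when every node has 1 cpu */
}

int main(void)
{
	printf("busiest for cpu 0: %d\n", find_busiest_in_node(0));
	return 0;
}
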
> > And, as a general guideline, boxes with
> > different local-remote latency ratios will probably benefit from different
> > inter-node balance intervals.  I don't know what these ratios are, but I'd
> > like the kernel to have the ability to change for one arch and not affect
> > another.
>
> I fully appreciate there are huge differences... I am curious whether
> you can see much improvement in practice.

I think AMD would be the first good test.  Maybe Andi has some results
comparing numasched with plain O(1); that would be a good indication.
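
To make the arch-specific interval idea above concrete, something like
this is what I have in mind (the macro name, config symbol, and values
below are made up for illustration, not the exact ones from the patch):

/*
 * Illustrative only -- NODE_BALANCE_RATE and ONE_CPU_PER_NODE are
 * hypothetical names.  The point is that each arch picks how many
 * balance ticks pass between inter-node balance attempts.
 */
#include <stdio.h>

#ifdef ONE_CPU_PER_NODE		/* e.g. AMD: every useful balance crosses nodes */
#define NODE_BALANCE_RATE 1
#else				/* higher remote latency: cross nodes less often */
#define NODE_BALANCE_RATE 10
#endif

static int node_balance_due(unsigned long ticks)
{
	return (ticks % NODE_BALANCE_RATE) == 0;
}

int main(void)
{
	unsigned long t;

	for (t = 0; t < 20; t++)
		if (node_balance_due(t))
			printf("tick %lu: try inter-node balance\n", t);
	return 0;
}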