linux-kernel.vger.kernel.org archive mirror
* [RFC] generalise scheduling classes
       [not found]                     ` <3FC01817.3090705@cyberone.com.au>
@ 2003-11-23 11:57                       ` Nick Piggin
  2003-11-23 12:01                         ` Ingo Molnar
                                           ` (3 more replies)
  0 siblings, 4 replies; 29+ messages in thread
From: Nick Piggin @ 2003-11-23 11:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: Martin J. Bligh, Andi Kleen, Ingo Molnar, Andi Kleen,
	Con Kolivas, Andrew Morton, jbarnes, efocht, John Hawkes, wookie

Hi everyone,
We still don't have an HT aware scheduler, which is unfortunate because
weird stuff like that looks like it will only become more common in future.

I made a patch on top of my recent NUMA/SMP scheduling stuff to implement
generalised scheduling classes. With this modification we can allow
architectures to control scheduling policy in a much finer way.
Hyperthreading should be no problem, and hierarchical (NUMA) nodes should
be doable as well.

I'm not exactly sure how architecture-specific code is supposed to be
handled; I'll have to have a look at some examples. Basically, each
architecture builds up its own scheduling "classes".

I have supplied a default function to build the classes if the
architecture doesn't provide one. It builds them so that functionality
should be similar to the previous standard local / remote behaviour.

Haven't done much testing yet, just asking for comments. Will these
classes be sufficient for everyone?

Class is struct sched_class in include/linux/sched.h
Default classes are built by arch_init_sched_classes in kernel/sched.c
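
To give a rough picture, a per-level description along these lines would do
the job (a sketch only -- the field names here are illustrative, the real
struct is in the patch below):

        struct sched_class {
                cpumask_t span;                 /* CPUs this level covers */
                struct sched_class *parent;     /* next level up, NULL at the top */
                unsigned long balance_interval; /* how often to balance at this level */
                unsigned int imbalance_pct;     /* imbalance tolerated before moving tasks */
        };

An architecture would chain one of these per level (SMT siblings, shared
cache, node, whole machine) for each CPU; the default builder just produces
something equivalent to the old local / remote split.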

http://www.kerneltrap.org/~npiggin/w23/
The patch in question is this one
http://www.kerneltrap.org/~npiggin/w23/broken-out/sched-domain.patch

Best regards,
Nick



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-23 11:57                       ` [RFC] generalise scheduling classes Nick Piggin
@ 2003-11-23 12:01                         ` Ingo Molnar
  2003-11-23 12:15                           ` Nick Piggin
  2003-11-23 16:26                           ` Martin J. Bligh
  2003-11-23 21:38                         ` [RFC] generalise scheduling classes William Lee Irwin III
                                           ` (2 subsequent siblings)
  3 siblings, 2 replies; 29+ messages in thread
From: Ingo Molnar @ 2003-11-23 12:01 UTC (permalink / raw)
  To: Nick Piggin
  Cc: linux-kernel, Martin J. Bligh, Andi Kleen, Andi Kleen,
	Con Kolivas, Andrew Morton, jbarnes, efocht, John Hawkes, wookie


On Sun, 23 Nov 2003, Nick Piggin wrote:

> We still don't have an HT aware scheduler, [...]

uhm, have you seen my HT scheduler patches, in particular the HT scheduler
in Fedora Core 1, which is on top of a pretty recent 2.6 scheduler? Works
pretty well.

	Ingo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-23 12:01                         ` Ingo Molnar
@ 2003-11-23 12:15                           ` Nick Piggin
  2003-11-23 12:21                             ` Ingo Molnar
  2003-11-23 16:26                           ` Martin J. Bligh
  1 sibling, 1 reply; 29+ messages in thread
From: Nick Piggin @ 2003-11-23 12:15 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Martin J. Bligh, Andi Kleen, Andi Kleen,
	Con Kolivas, Andrew Morton, jbarnes, efocht, John Hawkes, wookie



Ingo Molnar wrote:

>On Sun, 23 Nov 2003, Nick Piggin wrote:
>
>
>>We still don't have an HT aware scheduler, [...]
>>
>
>uhm, have you seen my HT scheduler patches, in particular the HT scheduler
>in Fedora Core 1, which is on top of a pretty recent 2.6 scheduler? Works
>pretty well.
>

No, I have seen it. Sorry, I know you have done so and it looks good. I
wouldn't be averse to it being included, although Linus seems to be.
The changes I have made nearly give it to you for free anyway.

I just meant that there is not one in Linus' tree yet.

Nick



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-23 12:15                           ` Nick Piggin
@ 2003-11-23 12:21                             ` Ingo Molnar
  0 siblings, 0 replies; 29+ messages in thread
From: Ingo Molnar @ 2003-11-23 12:21 UTC (permalink / raw)
  To: Nick Piggin
  Cc: linux-kernel, Martin J. Bligh, Andi Kleen, Andi Kleen,
	Con Kolivas, Andrew Morton, jbarnes, efocht, John Hawkes, wookie


On Sun, 23 Nov 2003, Nick Piggin wrote:

> I just meant that there is not one in Linus' tree yet.

yes, because when i wrote it we were already in a feature freeze, and the
changes are intrusive. And being the scheduler maintainer i'm supposed to
show a certain level of self restraint :-)

	Ingo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-23 12:01                         ` Ingo Molnar
  2003-11-23 12:15                           ` Nick Piggin
@ 2003-11-23 16:26                           ` Martin J. Bligh
  2003-12-01 10:08                             ` [patch] sched-HT-2.6.0-test11-A5 Ingo Molnar
  1 sibling, 1 reply; 29+ messages in thread
From: Martin J. Bligh @ 2003-11-23 16:26 UTC (permalink / raw)
  To: Ingo Molnar, Nick Piggin
  Cc: linux-kernel, Andi Kleen, Andi Kleen, Con Kolivas, Andrew Morton,
	jbarnes, efocht, John Hawkes, wookie

>> We still don't have an HT aware scheduler, [...]
> 
> uhm, have you seen my HT scheduler patches, in particular the HT scheduler
> in Fedora Core 1, which is on top of a pretty recent 2.6 scheduler? Works
> pretty well.

Do you have a pointer to an updated patch? I haven't seen a version of
that for a while, and would like to play with it.

Thanks,

M.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-23 11:57                       ` [RFC] generalise scheduling classes Nick Piggin
  2003-11-23 12:01                         ` Ingo Molnar
@ 2003-11-23 21:38                         ` William Lee Irwin III
  2003-11-24  2:19                           ` Nick Piggin
  2003-11-24  1:06                         ` Anton Blanchard
  2003-11-24 22:48                         ` bill davidsen
  3 siblings, 1 reply; 29+ messages in thread
From: William Lee Irwin III @ 2003-11-23 21:38 UTC (permalink / raw)
  To: Nick Piggin
  Cc: linux-kernel, Martin J. Bligh, Andi Kleen, Ingo Molnar,
	Andi Kleen, Con Kolivas, Andrew Morton, jbarnes, efocht,
	John Hawkes, wookie

On Sun, Nov 23, 2003 at 10:57:54PM +1100, Nick Piggin wrote:
> Class is struct sched_class in include/linux/sched.h
> Default classes are built by arch_init_sched_classes in kernel/sched.c
> http://www.kerneltrap.org/~npiggin/w23/
> The patch in question is this one
> http://www.kerneltrap.org/~npiggin/w23/broken-out/sched-domain.patch

There's a small terminological oddity in that "class" is usually meant
to describe policies governing a task, and "domain" system partitions
like the bits in your patch (I don't recall if they're meant to be
logical or physical). e.g. usage elsewhere would say that there is an
"interactive class", a "timesharing class", a "realtime class", and so
on. Apart from that (and I suppose it's a minor concern), this appears
relatively innocuous.


-- wli

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-23 11:57                       ` [RFC] generalise scheduling classes Nick Piggin
  2003-11-23 12:01                         ` Ingo Molnar
  2003-11-23 21:38                         ` [RFC] generalise scheduling classes William Lee Irwin III
@ 2003-11-24  1:06                         ` Anton Blanchard
  2003-11-24  2:26                           ` Nick Piggin
  2003-11-24 22:48                         ` bill davidsen
  3 siblings, 1 reply; 29+ messages in thread
From: Anton Blanchard @ 2003-11-24  1:06 UTC (permalink / raw)
  To: Nick Piggin
  Cc: linux-kernel, Martin J. Bligh, Andi Kleen, Ingo Molnar,
	Andi Kleen, Con Kolivas, Andrew Morton, jbarnes, efocht,
	John Hawkes, wookie


> We still don't have an HT aware scheduler, which is unfortunate because
> weird stuff like that looks like it will only become more common in 
> future.

Yep. Look at POWER5: 2 cores on a die sharing an L2 cache, and 2 threads
on each core. On top of that you have the higher-level NUMA
characteristics of the machine. So we need SMT as well as (potentially)
2 levels of NUMA. The overhead of enabling multiple levels of NUMA may
outweigh the gains; we need to do some analysis.

Looks like a lot of the other architectures are going multi-core,
multi-threaded...

(HT is an Intel trademark for what boils down to SMT)

Anton

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-23 21:38                         ` [RFC] generalise scheduling classes William Lee Irwin III
@ 2003-11-24  2:19                           ` Nick Piggin
  0 siblings, 0 replies; 29+ messages in thread
From: Nick Piggin @ 2003-11-24  2:19 UTC (permalink / raw)
  To: William Lee Irwin III
  Cc: linux-kernel, Martin J. Bligh, Andi Kleen, Ingo Molnar,
	Andi Kleen, Con Kolivas, Andrew Morton, jbarnes, efocht,
	John Hawkes, wookie



William Lee Irwin III wrote:

>On Sun, Nov 23, 2003 at 10:57:54PM +1100, Nick Piggin wrote:
>
>>Class is struct sched_class in include/linux/sched.h
>>Default classes are built by arch_init_sched_classes in kernel/sched.c
>>http://www.kerneltrap.org/~npiggin/w23/
>>The patch in question is this one
>>http://www.kerneltrap.org/~npiggin/w23/broken-out/sched-domain.patch
>>
>
>There's a small terminological oddity in that "class" is usually meant
>to describe policies governing a task, and "domain" system partitions
>like the bits in your patch (I don't recall if they're meant to be
>logical or physical). e.g. usage elsewhere would say that there is an
>"interactive class", a "timesharing class", a "realtime class", and so
>on. Apart from that (and I suppose it's a minor concern), this appears
>relatively innocuous.
>

Yeah, as you can see from the name of the patch as well, I got a bit
muddled. I think I'd better change it to sched_domain. Good point.



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-24  1:06                         ` Anton Blanchard
@ 2003-11-24  2:26                           ` Nick Piggin
  2003-11-24  2:39                             ` Davide Libenzi
  0 siblings, 1 reply; 29+ messages in thread
From: Nick Piggin @ 2003-11-24  2:26 UTC (permalink / raw)
  To: Anton Blanchard
  Cc: linux-kernel, Martin J. Bligh, Andi Kleen, Ingo Molnar,
	Andi Kleen, Con Kolivas, Andrew Morton, jbarnes, efocht,
	John Hawkes, wookie



Anton Blanchard wrote:

>>We still don't have an HT aware scheduler, which is unfortunate because
>>weird stuff like that looks like it will only become more common in 
>>future.
>>
>
>Yep. Look at POWER5: 2 cores on a die sharing an L2 cache, and 2 threads
>on each core. On top of that you have the higher-level NUMA
>characteristics of the machine. So we need SMT as well as (potentially)
>2 levels of NUMA. The overhead of enabling multiple levels of NUMA may
>outweigh the gains; we need to do some analysis.
>

Technically the scheduler knows nothing about NUMA. Previously it had a
local and a remote domain corresponding to intra- and inter-node CPU sets.
All it did was do the remote balancing a little more gently. But we'll call
it NUMA scheduling.

What you want for POWER5 is very aggressive sharing at the SMT level, and
possibly even at the chip level if the cores share the L2. Less aggressive
for node-local, and then even less for remote.
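
Purely as an illustration of that shape (the knob names follow the struct
sketched in my first mail, the cpu spans are omitted, and the numbers are
invented, not measured), something like:

        /* one level per line, innermost first; a smaller interval and
         * lower pct means more aggressive balancing at that level */
        static struct sched_class power5_levels[] = {
                { .balance_interval =   1, .imbalance_pct = 110 }, /* SMT siblings */
                { .balance_interval =   2, .imbalance_pct = 125 }, /* core pair, shared L2 */
                { .balance_interval =  10, .imbalance_pct = 150 }, /* NUMA node */
                { .balance_interval = 100, .imbalance_pct = 200 }, /* whole machine */
        };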

SGI, I think, have differing distances between NUMA nodes, and they have
expressed possible interest in a multi-level system.

I can't give you good benchmark numbers because I only have the NUMAQ at
OSDL to test on - it's only got 2 levels anyway. I should think the
overheads are quite minor considering it is all in slow paths (balancing).



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-24  2:26                           ` Nick Piggin
@ 2003-11-24  2:39                             ` Davide Libenzi
  0 siblings, 0 replies; 29+ messages in thread
From: Davide Libenzi @ 2003-11-24  2:39 UTC (permalink / raw)
  To: Nick Piggin; +Cc: linux-kernel

[Cc list trimmed. There was the whole world]

On Mon, 24 Nov 2003, Nick Piggin wrote:

> Technically the scheduler knows nothing about NUMA. Previously it had a
> local and a remote domain corresponding to intra- and inter-node CPU sets.
> All it did was do the remote balancing a little more gently. But we'll call
> it NUMA scheduling.

One patch I did ages ago used an NxN topology matrix storing distances
(read: move weights) between CPUs: mat[i][j] == distance/weight i <-> j.
At that time the matrix was bolted in since there was no topology API; maybe
now it can be built a little more wisely using HT and NUMA topology info.
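
A minimal sketch of the idea (node_of[] and sibling_of[] are hypothetical
per-cpu tables the arch code would fill in from its topology info, not
existing kernel symbols):

        static int node_of[NR_CPUS];            /* NUMA node of each cpu */
        static int sibling_of[NR_CPUS];         /* SMT sibling of each cpu, -1 if none */
        static int cpu_distance[NR_CPUS][NR_CPUS];

        static void __init build_cpu_distance(void)
        {
                int i, j;

                for (i = 0; i < NR_CPUS; i++)
                        for (j = 0; j < NR_CPUS; j++) {
                                if (i == j)
                                        cpu_distance[i][j] = 0; /* same cpu */
                                else if (sibling_of[i] == j)
                                        cpu_distance[i][j] = 1; /* SMT siblings */
                                else if (node_of[i] == node_of[j])
                                        cpu_distance[i][j] = 2; /* same node */
                                else
                                        cpu_distance[i][j] = 4; /* remote node */
                        }
        }

The balancer would then prefer pulling from the CPU with the smallest
cpu_distance[this_cpu][src] that still shows an imbalance.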



- Davide



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-23 11:57                       ` [RFC] generalise scheduling classes Nick Piggin
                                           ` (2 preceding siblings ...)
  2003-11-24  1:06                         ` Anton Blanchard
@ 2003-11-24 22:48                         ` bill davidsen
  2003-11-25  1:46                           ` Nick Piggin
  3 siblings, 1 reply; 29+ messages in thread
From: bill davidsen @ 2003-11-24 22:48 UTC (permalink / raw)
  To: linux-kernel

In article <3FC0A0C2.90800@cyberone.com.au>,
Nick Piggin  <piggin@cyberone.com.au> wrote:

| We still don't have an HT aware scheduler, which is unfortunate because
| weird stuff like that looks like it will only become more common in future.

The idea is hardly new, in the late 60's GE (still a mainframe vendor at
that time) was looking at two execution units on a single memory path.
They decided it would have problems with memory bandwidth, what else is
new?


-- 
bill davidsen <davidsen@tmr.com>
  CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-24 22:48                         ` bill davidsen
@ 2003-11-25  1:46                           ` Nick Piggin
  2003-11-25 16:23                             ` Bill Davidsen
  0 siblings, 1 reply; 29+ messages in thread
From: Nick Piggin @ 2003-11-25  1:46 UTC (permalink / raw)
  To: bill davidsen; +Cc: linux-kernel



bill davidsen wrote:

>In article <3FC0A0C2.90800@cyberone.com.au>,
>Nick Piggin  <piggin@cyberone.com.au> wrote:
>
>| We still don't have an HT aware scheduler, which is unfortunate because
>| weird stuff like that looks like it will only become more common in future.
>
>The idea is hardly new, in the late 60's GE (still a mainframe vendor at
>that time) was looking at two execution units on a single memory path.
>They decided it would have problems with memory bandwidth, what else is
>new?
>

I don't think I said new, but I guess they (SMT, NUMA, CMP) are newish
for architectures supported by the Linux kernel. OK, NUMA has been around
for a while, but the scheduler apparently doesn't work so well for atypical
new NUMAs like the Opteron.



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [RFC] generalise scheduling classes
  2003-11-25  1:46                           ` Nick Piggin
@ 2003-11-25 16:23                             ` Bill Davidsen
  0 siblings, 0 replies; 29+ messages in thread
From: Bill Davidsen @ 2003-11-25 16:23 UTC (permalink / raw)
  To: Nick Piggin; +Cc: linux-kernel

On Tue, 25 Nov 2003, Nick Piggin wrote:

> 
> 
> bill davidsen wrote:
> 
> >In article <3FC0A0C2.90800@cyberone.com.au>,
> >Nick Piggin  <piggin@cyberone.com.au> wrote:
> >
> >| We still don't have an HT aware scheduler, which is unfortunate because
> >| weird stuff like that looks like it will only become more common in future.
> >
> >The idea is hardly new, in the late 60's GE (still a mainframe vendor at
> >that time) was looking at two execution units on a single memory path.
> >They decided it would have problems with memory bandwidth, what else is
> >new?
> >
> 
> I don't think I said new, but I guess they (SMT, NUMA, CMP) are newish
> for architectures supported by the Linux kernel. OK, NUMA has been around
> for a while, but the scheduler apparently doesn't work so well for atypical
> new NUMAs like the Opteron.

You didn't say new, and I wasn't correcting you; I just thought that the
historical perspective might be interesting. I would love to try the new
scheduler, but my test computer is not pleased with Fedora.

-- 
bill davidsen <davidsen@tmr.com>
  CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [RFC] Further SMP / NUMA scheduler improvements
       [not found]                 ` <3FBF099F.8070403@cyberone.com.au>
       [not found]                   ` <1010800000.1069532100@[10.10.2.4]>
@ 2003-11-30  9:35                   ` Nick Piggin
  1 sibling, 0 replies; 29+ messages in thread
From: Nick Piggin @ 2003-11-30  9:35 UTC (permalink / raw)
  To: Martin J. Bligh
  Cc: Andi Kleen, Ingo Molnar, Andi Kleen, Con Kolivas, Andrew Morton,
	jbarnes, efocht, John Hawkes, wookie, linux-kernel, LSE

http://www.kerneltrap.org/~npiggin/w25/

I've been sorting some bugs out. It's pretty stable, although CONFIG_SMT
is still apparently broken. It will probably stay that way, as I don't
have an SMP P4 to test with and am a bit busy with other things.

I have done some tweaking of the "domains", and found some pretty
impressive performance improvements. The 16-way NUMAQ at OSDL is
running out of steam now (i.e. I've already picked up quite a bit of the
low-hanging fruit), and I've only got a couple more days with it anyway.
So I'd especially like other architectures to try it if interested.

I could assist in building architecture-specific scheduling descriptions
if anyone would like to try it on a non-traditional SMP / NUMA machine;
however, I think something might be broken with SMT handling. Probably
active migration. I can't be bothered fixing it unless I can find an SMP
P4 HT to test with :P

(dbench is most significantly improved)

System is dev16-000 at OSDL. total/idle ticks are profiler ticks.
16GB ram, 4x4 nodes NUMAQ
model name      : Pentium III (Katmai)
cpu MHz         : 495.274
cache size      : 512 KB
 
dbench                  8       16      32      64      128
bk19                    470.84  433.47  360.04  351.96  359.86
w22                     477.35  439.82  387.77  378.65  367.07
w25                     473.10  587.74  503.35  532.83  524.61
total/idle ticks
bk19                    3598603/124601
w22                     3425876/227225
w25                     2408853/232176
 
tbench                  8       16      24      32
bk19                    46.17   58.74   60.78   59.79
w22                     47.39   58.72   57.73   57.86
w25                     53.59   58.60   65.80   63.35
total/idle ticks
bk19                    7603448/1115754
w22                     7808203/1897589
w25                     7150680/2038258
                                                                                

kernbench (make -j, 5 runs)     real    user    sys
bk19                            82.231  997.021 152.044
w22                             81.384  973.653 140.246
w25                             80.650  970.900 131.833
total/idle ticks
bk19                            2218739/1442831
w22                             2110820/1398357
w25                             2062930/1393573
                                                                                

                                                                                

hackbench (3 runs)      1       100
bk19                    0.591   37.578
w22                     0.386   31.954
w25                     0.365   33.289
total/idle ticks
bk19                    1948913/319060
w22                     1655610/360777
w25                     1721178/496178
                                                                                

reaim 256               parent time  child stime  child utime  jpm
bk19                    247.33       373.57       3568.12      6396.64
w22                     250.37       332.70       3614.94      6318.97
w25                     258.67       311.85       3684.09      6116.21



^ permalink raw reply	[flat|nested] 29+ messages in thread

* [patch] sched-HT-2.6.0-test11-A5
  2003-11-23 16:26                           ` Martin J. Bligh
@ 2003-12-01 10:08                             ` Ingo Molnar
  2003-12-06 19:01                               ` Martin J. Bligh
  2003-12-08 17:56                               ` William Lee Irwin III
  0 siblings, 2 replies; 29+ messages in thread
From: Ingo Molnar @ 2003-12-01 10:08 UTC (permalink / raw)
  To: Martin J. Bligh; +Cc: linux-kernel


On Sun, 23 Nov 2003, Martin J. Bligh wrote:

> > have you seen my HT scheduler patches, in particular the HT scheduler
> > in Fedora Core 1, which is on top of a pretty recent 2.6 scheduler? Works
> > pretty well.
> 
> Do you have a pointer to an updated patch? I haven't seen a version of
> that for a while, and would like to play with it.

i've uploaded the HT scheduler patch against 2.6.0-test11 to:

    redhat.com/~mingo/O(1)-scheduler/sched-HT-2.6.0-test11-A5

note, the patch includes a fix to sync wakeups, which might hurt lat_ctx.  
I've attached the fix against vanilla 2.6.0-test11 as well.

	Ingo

--- linux/kernel/sched.c.orig	
+++ linux/kernel/sched.c	
@@ -646,7 +646,7 @@ repeat_lock_task:
 				 */
 				p->activated = -1;
 			}
-			if (sync)
+			if (sync && (task_cpu(p) == smp_processor_id()))
 				__activate_task(p, rq);
 			else {
 				activate_task(p, rq);

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-01 10:08                             ` [patch] sched-HT-2.6.0-test11-A5 Ingo Molnar
@ 2003-12-06 19:01                               ` Martin J. Bligh
  2003-12-06 21:40                                 ` Zwane Mwaikambo
  2003-12-08 17:56                               ` William Lee Irwin III
  1 sibling, 1 reply; 29+ messages in thread
From: Martin J. Bligh @ 2003-12-06 19:01 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel


> i've uploaded the HT scheduler patch against 2.6.0-test11 to:
> 
>     redhat.com/~mingo/O(1)-scheduler/sched-HT-2.6.0-test11-A5

Hangs on boot (NUMA-Q) after "Starting migration thread for cpu 0".
Any ideas what that might be?

M.


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-06 19:01                               ` Martin J. Bligh
@ 2003-12-06 21:40                                 ` Zwane Mwaikambo
  2003-12-07 13:34                                   ` Ingo Molnar
  0 siblings, 1 reply; 29+ messages in thread
From: Zwane Mwaikambo @ 2003-12-06 21:40 UTC (permalink / raw)
  To: Martin J. Bligh; +Cc: Ingo Molnar, linux-kernel

On Sat, 6 Dec 2003, Martin J. Bligh wrote:

>
> > i've uploaded the HT scheduler patch against 2.6.0-test11 to:
> >
> >     redhat.com/~mingo/O(1)-scheduler/sched-HT-2.6.0-test11-A5
>
> Hangs on boot (NUMA-Q) after "Starting migration thread for cpu 0".
> Any ideas what that might be?

Ingo here is a patch to fix compilation on larger NR_CPUS, i have also
appended the oops Martin is probably seeing. Currently debugging it.

Index: linux-2.6.0-test11-ht/kernel/sched.c
===================================================================
RCS file: /build/cvsroot/linux-2.6.0-test11/kernel/sched.c,v
retrieving revision 1.1.1.2
diff -u -p -B -r1.1.1.2 sched.c
--- linux-2.6.0-test11-ht/kernel/sched.c	6 Dec 2003 21:21:07 -0000	1.1.1.2
+++ linux-2.6.0-test11-ht/kernel/sched.c	6 Dec 2003 21:22:30 -0000
@@ -266,7 +266,7 @@ static DEFINE_PER_CPU(struct runqueue, r
 #define migration_queue(cpu)	(&cpu_int(cpu)->migration_queue)

 #if NR_CPUS > 1
-# define task_allowed(p, cpu)	((p)->cpus_allowed & (1UL << (cpu)))
+# define task_allowed(p, cpu)	cpu_isset(cpu, (p)->cpus_allowed)
 #else
 # define task_allowed(p, cpu)	1
 #endif

..... CPU clock speed is 398.0715 MHz.
..... host bus clock speed is 99.0678 MHz.
checking TSC synchronization across 2 CPUs: passed.
Starting migration thread for cpu 0
Unable to handle kernel paging request at virtual address f000afae
 printing eip:
c0124608
*pde = 00000000
Oops: 0002 [#1]
CPU:    0
EIP:    0060:[<c0124608>]    Not tainted
EFLAGS: 00010013
EIP is at migration_task+0x158/0x290
eax: 00000001   ebx: c150bbc0   ecx: c150c5e4   edx: f000afae
esi: c1b9ffcc   edi: c1b9e000   ebp: c1b9ffec   esp: c1b9ffc0
ds: 007b   es: 007b   ss: 0068
Process migration/0 (pid: 2, threadinfo=c1b9e000 task=c1bd29b0)
Stack: 00000000 c150bbc0 00000000 c1bbbf9c 00000000 00000063 00000000 00000000
       c01244b0 00000000 00000000 00000000 c0107185 c1bbbf9c 00000000 00000000
Call Trace:
 [<c01244b0>] migration_task+0x0/0x290
 [<c0107185>] kernel_thread_helper+0x5/0x10

Code: 89 02 89 50 04 8b 55 d4 89 12 89 52 04 8b 4d d8 b2 01 81 79


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-06 21:40                                 ` Zwane Mwaikambo
@ 2003-12-07 13:34                                   ` Ingo Molnar
  2003-12-07 16:39                                     ` Anton Blanchard
  0 siblings, 1 reply; 29+ messages in thread
From: Ingo Molnar @ 2003-12-07 13:34 UTC (permalink / raw)
  To: Zwane Mwaikambo; +Cc: Martin J. Bligh, linux-kernel


On Sat, 6 Dec 2003, Zwane Mwaikambo wrote:

> Ingo here is a patch to fix compilation on larger NR_CPUS, [...]

thanks.

> [...] i have also appended the oops Martin is probably seeing. Currently
> debugging it.

i've seen a similar crash once on a 2-way (4-way) HT box, so there's some
startup race going on most likely.

	Ingo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-07 13:34                                   ` Ingo Molnar
@ 2003-12-07 16:39                                     ` Anton Blanchard
  2003-12-07 17:16                                       ` Martin J. Bligh
  2003-12-07 17:22                                       ` Anton Blanchard
  0 siblings, 2 replies; 29+ messages in thread
From: Anton Blanchard @ 2003-12-07 16:39 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Zwane Mwaikambo, Martin J. Bligh, linux-kernel

 
Hi,

> i've seen a similar crash once on a 2-way (4-way) HT box, so there's some
> startup race going on most likely.

Im seeing bootup crashes every now and then on a ppc64 box too. A few
other things Ive noticed:

- nr_running looks to be wrong. On an idle machine just after booting:

00:07:20 up 14 min,  3 users,  load average: 8.00, 7.67, 4.95

Its a 4 core 8 thread machine, so perhaps we are counting idle threads.

- The printk had me confused, we are really mapping cpu2 onto cpu1s runqueue.
Patch below.

- I tried the HT scheduler with NUMA enabled. Same machine, 4 core 8
threads, each NUMA node has 2 cores, 4 threads. Its easy to end up in a sub
optimal state:

 Cpu0 :   0.0% user,   0.0% system,   0.0% nice, 100.0% idle,   0.0% IO-wait
 Cpu1 :   0.0% user,   0.0% system,   0.0% nice, 100.0% idle,   0.0% IO-wait
 Cpu2 : 100.0% user,   0.0% system,   0.0% nice,   0.0% idle,   0.0% IO-wait
 Cpu3 : 100.0% user,   0.0% system,   0.0% nice,   0.0% idle,   0.0% IO-wait

 Cpu4 : 100.0% user,   0.0% system,   0.0% nice,   0.0% idle,   0.0% IO-wait
 Cpu5 :   0.0% user,   0.0% system,   0.0% nice, 100.0% idle,   0.0% IO-wait
 Cpu6 : 100.0% user,   0.0% system,   0.0% nice,   0.0% idle,   0.0% IO-wait
 Cpu7 :   0.7% user,   0.7% system,   0.0% nice,  98.6% idle,   0.0% IO-wait

cpu0/1 are an SMT pair, cpu 0-3 are a NUMA node. As you can see cpu0/1
is free and cpu2/3 is busy on both threads. So far we have noticed
nr_cpus_node should probably be nr_runqueues_node now, otherwise the 
inter node balancing code could make bad decisions. However in this case
the imbalance is within the node, so Im not sure why cpu0/1 runqueue
hasnt stolen a task from cpu2/3.

Anton

--- foo/kernel/sched.c.ff	2003-12-03 02:03:41.000000000 -0600
+++ foo/kernel/sched.c	2003-12-04 11:37:40.980022085 -0600
@@ -1452,7 +1452,7 @@
 	runqueue_t *rq2 = cpu_rq(cpu2);
 	int cpu2_idx_orig = cpu_idx(cpu2), cpu2_idx;
 
-	printk("mapping CPU#%d's runqueue to CPU#%d's runqueue.\n", cpu1, cpu2);
+	printk("mapping CPU#%d's runqueue to CPU#%d's runqueue.\n", cpu2, cpu1);
 	BUG_ON(rq1 == rq2 || rq2->nr_running || rq_idx(cpu1) != cpu1);
 	/*
 	 * At this point, we dont have anything in the runqueue yet. So,

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-07 16:39                                     ` Anton Blanchard
@ 2003-12-07 17:16                                       ` Martin J. Bligh
  2003-12-07 18:31                                         ` Zwane Mwaikambo
  2003-12-07 20:17                                         ` Anton Blanchard
  2003-12-07 17:22                                       ` Anton Blanchard
  1 sibling, 2 replies; 29+ messages in thread
From: Martin J. Bligh @ 2003-12-07 17:16 UTC (permalink / raw)
  To: Anton Blanchard, Ingo Molnar; +Cc: Zwane Mwaikambo, linux-kernel

>> i've seen a similar crash once on a 2-way (4-way) HT box, so there's some
>> startup race going on most likely.
> 
> Im seeing bootup crashes every now and then on a ppc64 box too. A few
> other things Ive noticed:

ALT+sysrq+t does nothing, but NMI watchdog gives me:

-----------------------------------------

Starting migration thread for cpu 0
NMI Watchdog detected LOCKUP on CPU0, eip c011c11b, registers:
CPU:    0
EIP:    0060:[<c011c11b>]    Not tainted
EFLAGS: 00000086
EIP is at .text.lock.sched+0xee/0x243
eax: 0000000c   ebx: 00000286   ecx: f018a000   edx: c3932bc0
esi: 0000000c   edi: c3932bc0   ebp: f018bfb4   esp: f018bfac
ds: 007b   es: 007b   ss: 0068
Process migration/0 (pid: 2, threadinfo=f018a000 task=f018f330)
Stack: 00000000 00000000 f018bfec c011befe 02000000 00000020 c011bd54 00000000 
       00000000 f018f330 c0309c60 c0309c60 f018a000 f018a000 00000000 00000063 
       00000000 c0107001 f01a3fac 00000000 00000000 
Call Trace:
 [<c011befe>] migration_task+0x1aa/0x1b4
 [<c011bd54>] migration_task+0x0/0x1b4
 [<c0107001>] kernel_thread_helper+0x5/0xc

Code: 7e f8 e9 44 e6 ff ff f3 90 80 7e 04 00 7e f8 e9 6b e6 ff ff 
console shuts up ...

---------------------------------------------

 [<c011befe>] migration_task+0x1aa/0x1b4

is just after the return from complete, so I'd say we're deadlocked
on "spin_lock_irqsave(&x->wait.lock, flags);" in complete. Afraid I 
don't understand what the completion / migration stuff is attempting 
to do, so can't be more help ... I can reproduce this 100% of the 
time if you want something tried though.

M.



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-07 16:39                                     ` Anton Blanchard
  2003-12-07 17:16                                       ` Martin J. Bligh
@ 2003-12-07 17:22                                       ` Anton Blanchard
  1 sibling, 0 replies; 29+ messages in thread
From: Anton Blanchard @ 2003-12-07 17:22 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Zwane Mwaikambo, Martin J. Bligh, linux-kernel

 
> - I tried the HT scheduler with NUMA enabled. Same machine, 4 core 8
> threads, each NUMA node has 2 cores, 4 threads. Its easy to end up in a sub
> optimal state:

I just managed to get it into the same state with NUMA disabled:

 Cpu0 :   0.3% user,   0.0% system,   0.0% nice,  99.7% idle,   0.0% IO-wait
 Cpu1 : 100.0% user,   0.0% system,   0.0% nice,   0.0% idle,   0.0% IO-wait
 Cpu2 :   0.0% user,   0.0% system,   0.0% nice, 100.0% idle,   0.0% IO-wait
 Cpu3 : 100.0% user,   0.0% system,   0.0% nice,   0.0% idle,   0.0% IO-wait

 Cpu4 : 100.0% user,   0.0% system,   0.0% nice,   0.0% idle,   0.0% IO-wait
 Cpu5 : 100.0% user,   0.0% system,   0.0% nice,   0.0% idle,   0.0% IO-wait
 Cpu6 :   0.0% user,   0.0% system,   0.0% nice, 100.0% idle,   0.0% IO-wait
 Cpu7 :   0.0% user,   0.0% system,   0.0% nice, 100.0% idle,   0.0% IO-wait

Anton

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-07 17:16                                       ` Martin J. Bligh
@ 2003-12-07 18:31                                         ` Zwane Mwaikambo
  2003-12-07 20:17                                         ` Anton Blanchard
  1 sibling, 0 replies; 29+ messages in thread
From: Zwane Mwaikambo @ 2003-12-07 18:31 UTC (permalink / raw)
  To: Martin J. Bligh; +Cc: Anton Blanchard, Ingo Molnar, linux-kernel

On Sun, 7 Dec 2003, Martin J. Bligh wrote:

> >> i've seen a similar crash once on a 2-way (4-way) HT box, so there's some
> >> startup race going on most likely.
> >
> > Im seeing bootup crashes every now and then on a ppc64 box too. A few
> > other things Ive noticed:

Just a datapoint, the migration_queue list appears to be getting
'corrupted' the ->next pointer is NULL on entry to migration_task.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-07 17:16                                       ` Martin J. Bligh
  2003-12-07 18:31                                         ` Zwane Mwaikambo
@ 2003-12-07 20:17                                         ` Anton Blanchard
  2003-12-08 17:57                                           ` Ingo Molnar
  1 sibling, 1 reply; 29+ messages in thread
From: Anton Blanchard @ 2003-12-07 20:17 UTC (permalink / raw)
  To: Martin J. Bligh; +Cc: Ingo Molnar, Zwane Mwaikambo, linux-kernel


> > Im seeing bootup crashes every now and then on a ppc64 box too. A few
> > other things Ive noticed:
> 
> ALT+sysrq+t does nothing, but NMI watchdog gives me:

I seem to be seeing 2 different problems; the first is where we are
running on the cpu we are about to remap:

running on cpu 1
mapping CPU#1's runqueue to CPU#0's runqueue.
kernel BUG in sched_map_runqueue at kernel/sched.c:1460!

ie:

BUG_ON(rq1 == rq2 || rq2->nr_running || rq_idx(cpu1) != cpu1);
                     ^^^

We should bounce ourselves off cpu2 before merging the runqueues.

Anton

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-01 10:08                             ` [patch] sched-HT-2.6.0-test11-A5 Ingo Molnar
  2003-12-06 19:01                               ` Martin J. Bligh
@ 2003-12-08 17:56                               ` William Lee Irwin III
  2003-12-08 18:21                                 ` Ingo Molnar
  2003-12-08 19:36                                 ` William Lee Irwin III
  1 sibling, 2 replies; 29+ messages in thread
From: William Lee Irwin III @ 2003-12-08 17:56 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel

On Mon, Dec 01, 2003 at 11:08:17AM +0100, Ingo Molnar wrote:
> i've uploaded the HT scheduler patch against 2.6.0-test11 to:
>     redhat.com/~mingo/O(1)-scheduler/sched-HT-2.6.0-test11-A5
> note, the patch includes a fix to sync wakeups, which might hurt lat_ctx.  
> I've attached the fix against vanilla 2.6.0-test11 as well.

This appears to either leak migration threads or not set
rq->cpu[x].migration_thread basically ever for x > 0. Or if they
are shut down, how? Also, what makes sure cpu_idx is initialized
before they wake? They'll all spin on cpu_rq(0)->lock, no?

Furthermore, sched_map_runqueue() is performed after all the idle
threads are running and all the notifiers have kicked the migration
threads, but does no locking whatsoever.

Also, does init_idle() need to move into rest_init()? It should be
equivalent to its current placement.

Why not per_cpu for __rq_idx[] and __cpu_idx[]? This would have the
advantage of residing on node-local memory for sane architectures
(and perhaps in the future, some insane ones).
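
Roughly the conversion being suggested, as a sketch (the per-cpu variable
names here are made up; the real arrays and their types are in the patch):

        static DEFINE_PER_CPU(int, sched_rq_idx);
        static DEFINE_PER_CPU(int, sched_cpu_idx);

        #define rq_idx(cpu)     per_cpu(sched_rq_idx, (cpu))
        #define cpu_idx(cpu)    per_cpu(sched_cpu_idx, (cpu))

so that each CPU's entry lives in that CPU's per-cpu area rather than in
one shared array.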

-- wli

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-07 20:17                                         ` Anton Blanchard
@ 2003-12-08 17:57                                           ` Ingo Molnar
  0 siblings, 0 replies; 29+ messages in thread
From: Ingo Molnar @ 2003-12-08 17:57 UTC (permalink / raw)
  To: Anton Blanchard; +Cc: Martin J. Bligh, Zwane Mwaikambo, linux-kernel


On Mon, 8 Dec 2003, Anton Blanchard wrote:

> running on cpu 1
> mapping CPU#1's runqueue to CPU#0's runqueue.
> kernel BUG in sched_map_runqueue at kernel/sched.c:1460!
> 
> ie:
> 
> BUG_ON(rq1 == rq2 || rq2->nr_running || rq_idx(cpu1) != cpu1);
>                      ^^^
> 
> We should bounce ourselves off cpu2 before merging the runqueues.

hm, a bad assumption about where the boot code runs. Could you try to just
do something like this prior to that BUG_ON():

	set_cpus_allowed(current, cpumask_of_cpu(cpu1));
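
In context it would presumably sit near the top of sched_map_runqueue(),
roughly like this (a sketch only -- the prototype and surrounding code are
guessed from the lines quoted earlier; the real function does more):

        static void sched_map_runqueue(int cpu1, int cpu2)
        {
                runqueue_t *rq1 = cpu_rq(cpu1);
                runqueue_t *rq2 = cpu_rq(cpu2);

                /* bounce ourselves off cpu2, whose runqueue is about to be
                 * merged away, so that rq2->nr_running can drop to zero */
                set_cpus_allowed(current, cpumask_of_cpu(cpu1));

                BUG_ON(rq1 == rq2 || rq2->nr_running || rq_idx(cpu1) != cpu1);

                /* ... merge rq2 into rq1 as before ... */
        }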

does this fix the crash?

	Ingo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-08 17:56                               ` William Lee Irwin III
@ 2003-12-08 18:21                                 ` Ingo Molnar
  2003-12-08 19:12                                   ` William Lee Irwin III
  2003-12-08 22:20                                   ` age
  2003-12-08 19:36                                 ` William Lee Irwin III
  1 sibling, 2 replies; 29+ messages in thread
From: Ingo Molnar @ 2003-12-08 18:21 UTC (permalink / raw)
  To: William Lee Irwin III; +Cc: linux-kernel, Anton Blanchard


On Mon, 8 Dec 2003, William Lee Irwin III wrote:

> This appears to either leak migration threads or not set
> rq->cpu[x].migration_thread basically ever for x > 0. Or if they are
> shut down, how? Also, what makes sure cpu_idx is initialized before they
> wake? They'll all spin on cpu_rq(0)->lock, no?

yep, it just leaks migration threads. Not a big problem right now, but for
hotplug CPU support this needs to be fixed.

> Furthermore, sched_map_runqueue() is performed after all the idle
> threads are running and all the notifiers have kicked the migration
> threads, but does no locking whatsoever.

yep - at this point nothing else is really supposed to run but you are
right it must be locked properly.

> Also, does init_idle() need to move into rest_init()? It should be
> equivalent to its current placement.

this is a leftover of a change that went into 2.6 already. I've removed
this change.

> Why not per_cpu for __rq_idx[] and __cpu_idx[]? This would have the
> advantage of residing on node-local memory for sane architectures (and
> perhaps in the future, some insane ones).

agreed, i've changed them to be per-cpu.

new patch with all your suggestions included is at:

  redhat.com/~mingo/O(1)-scheduler/sched-SMT-2.6.0-test11-C1

it also includes the bounce-to-cpu1 fix from/for Anton.

	Ingo

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-08 18:21                                 ` Ingo Molnar
@ 2003-12-08 19:12                                   ` William Lee Irwin III
  2003-12-08 22:20                                   ` age
  1 sibling, 0 replies; 29+ messages in thread
From: William Lee Irwin III @ 2003-12-08 19:12 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel, Anton Blanchard

On Mon, 8 Dec 2003, William Lee Irwin III wrote:
>> Why not per_cpu for __rq_idx[] and __cpu_idx[]? This would have the
>> advantage of residing on node-local memory for sane architectures (and
>> perhaps in the future, some insane ones).

On Mon, Dec 08, 2003 at 07:21:14PM +0100, Ingo Molnar wrote:
> agreed, i've changed them to be per-cpu.
> new patch with all your suggestions included is at:
>   redhat.com/~mingo/O(1)-scheduler/sched-SMT-2.6.0-test11-C1
> it also includes the bounce-to-cpu1 fix from/for Anton.

This looks pretty good, thanks.


-- wli

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-08 17:56                               ` William Lee Irwin III
  2003-12-08 18:21                                 ` Ingo Molnar
@ 2003-12-08 19:36                                 ` William Lee Irwin III
  1 sibling, 0 replies; 29+ messages in thread
From: William Lee Irwin III @ 2003-12-08 19:36 UTC (permalink / raw)
  To: Ingo Molnar, linux-kernel

On Mon, Dec 08, 2003 at 09:56:22AM -0800, William Lee Irwin III wrote:
> Furthermore, sched_map_runqueue() is performed after all the idle
> threads are running and all the notifiers have kicked the migration
> threads, but does no locking whatsoever.

Not quite true for migration threads; they're kicked off smp_init(),
called strictly after smp_prepare_cpus(), so all's well with them.
The idle threads shouldn't enter schedule() either, since
start_secondary() spins until smp_init() sets the smp_commenced_mask
bits in cpu_up().

So the important parts of all that were unfortunately all wrong. I'll
look again.


-- wli

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [patch] sched-HT-2.6.0-test11-A5
  2003-12-08 18:21                                 ` Ingo Molnar
  2003-12-08 19:12                                   ` William Lee Irwin III
@ 2003-12-08 22:20                                   ` age
  1 sibling, 0 replies; 29+ messages in thread
From: age @ 2003-12-08 22:20 UTC (permalink / raw)
  To: linux-kernel

Ingo Molnar wrote:

> 
> On Mon, 8 Dec 2003, William Lee Irwin III wrote:
> 
>> This appears to either leak migration threads or not set
>> rq->cpu[x].migration_thread basically ever for x > 0. Or if they are
>> shut down, how? Also, what makes sure cpu_idx is initialized before they
>> wake? They'll all spin on cpu_rq(0)->lock, no?
> 
> yep, it just leaks migration threads. Not a big problem right now, but for
> hotplug CPU support this needs to be fixed.
> 
>> Furthermore, sched_map_runqueue() is performed after all the idle
>> threads are running and all the notifiers have kicked the migration
>> threads, but does no locking whatsoever.
> 
> yep - at this point nothing else is really supposed to run but you are
> right it must be locked properly.
> 
>> Also, does init_idle() need to move into rest_init()? It should be
>> equivalent to its current placement.
> 
> this is a leftover of a change that went into 2.6 already. I've removed
> this change.
> 
>> Why not per_cpu for __rq_idx[] and __cpu_idx[]? This would have the
>> advantage of residing on node-local memory for sane architectures (and
>> perhaps in the future, some insane ones).
> 
> agreed, i've changed them to be per-cpu.
> 
> new patch with all your suggestions included is at:
> 
>   redhat.com/~mingo/O(1)-scheduler/sched-SMT-2.6.0-test11-C1
> 
> it also includes the bounce-to-cpu1 fix from/for Anton.
> 
> Ingo
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/


Hi Mingo

The same trouble:

kernel/built-in.o(.text+0x34f): In function `try_to_wake_up':
: undefined reference to `wake_up_cpu'
kernel/built-in.o(.text+0x4ad): In function `wake_up_forked_process':
: undefined reference to `set_task_cpu'
kernel/built-in.o(.text+0xea1): In function `schedule':
: undefined reference to `set_task_cpu'
kernel/built-in.o(.text+0x1203): In function `schedule':
: undefined reference to `active_load_balance'
kernel/built-in.o(.init.text+0xaa): In function `init_idle':
: undefined reference to `set_task_cpu'
kernel/built-in.o(.init.text+0x1d6): In function `sched_init':
: undefined reference to `set_task_cpu'
make: *** [.tmp_vmlinux1] Error 1

regards,

Age Huisman


^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2003-12-08 22:12 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20031117021511.GA5682@averell>
     [not found] ` <3FB83790.3060003@cyberone.com.au>
     [not found]   ` <20031117141548.GB1770@colin2.muc.de>
     [not found]     ` <Pine.LNX.4.56.0311171638140.29083@earth>
     [not found]       ` <20031118173607.GA88556@colin2.muc.de>
     [not found]         ` <Pine.LNX.4.56.0311181846360.23128@earth>
     [not found]           ` <20031118235710.GA10075@colin2.muc.de>
     [not found]             ` <3FBAF84B.3050203@cyberone.com.au>
     [not found]               ` <501330000.1069443756@flay>
     [not found]                 ` <3FBF099F.8070403@cyberone.com.au>
     [not found]                   ` <1010800000.1069532100@[10.10.2.4]>
     [not found]                     ` <3FC01817.3090705@cyberone.com.au>
2003-11-23 11:57                       ` [RFC] generalise scheduling classes Nick Piggin
2003-11-23 12:01                         ` Ingo Molnar
2003-11-23 12:15                           ` Nick Piggin
2003-11-23 12:21                             ` Ingo Molnar
2003-11-23 16:26                           ` Martin J. Bligh
2003-12-01 10:08                             ` [patch] sched-HT-2.6.0-test11-A5 Ingo Molnar
2003-12-06 19:01                               ` Martin J. Bligh
2003-12-06 21:40                                 ` Zwane Mwaikambo
2003-12-07 13:34                                   ` Ingo Molnar
2003-12-07 16:39                                     ` Anton Blanchard
2003-12-07 17:16                                       ` Martin J. Bligh
2003-12-07 18:31                                         ` Zwane Mwaikambo
2003-12-07 20:17                                         ` Anton Blanchard
2003-12-08 17:57                                           ` Ingo Molnar
2003-12-07 17:22                                       ` Anton Blanchard
2003-12-08 17:56                               ` William Lee Irwin III
2003-12-08 18:21                                 ` Ingo Molnar
2003-12-08 19:12                                   ` William Lee Irwin III
2003-12-08 22:20                                   ` age
2003-12-08 19:36                                 ` William Lee Irwin III
2003-11-23 21:38                         ` [RFC] generalise scheduling classes William Lee Irwin III
2003-11-24  2:19                           ` Nick Piggin
2003-11-24  1:06                         ` Anton Blanchard
2003-11-24  2:26                           ` Nick Piggin
2003-11-24  2:39                             ` Davide Libenzi
2003-11-24 22:48                         ` bill davidsen
2003-11-25  1:46                           ` Nick Piggin
2003-11-25 16:23                             ` Bill Davidsen
2003-11-30  9:35                   ` [RFC] Further SMP / NUMA scheduler improvements Nick Piggin
