* [BENCHMARK] 2.5.63-mm2 + i/o schedulers with contest
@ 2003-03-04 2:54 Con Kolivas
From: Con Kolivas @ 2003-03-04 2:54 UTC (permalink / raw)
To: linux-kernel; +Cc: Andrew Morton
Here are contest (http://contest.kolivas.org) benchmarks using the osdl
hardware (http://www.osdl.org) for 2.5.63-mm2 and various i/o schedulers:
2.5.63-mm2: anticipatory scheduler (AS)
2.5.63-mm2cfq: complete fair queueing scheduler (CFQ)
2.5.63-mm2dl: deadline scheduler (DL)
no_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           4    79   94.9    0.0    0.0   1.00
2.5.63-mm2cfq    3    79   93.7    0.0    0.0   1.00
2.5.63-mm2       3    80   92.5    0.0    0.0   1.00
2.5.63-mm2dl     3    79   93.7    0.0    0.0   1.00
cacherun:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           4    76   97.4    0.0    0.0   0.96
2.5.63-mm2cfq    3    75   98.7    0.0    0.0   0.95
2.5.63-mm2       3    75   98.7    0.0    0.0   0.94
2.5.63-mm2dl     3    75   98.7    0.0    0.0   0.95
process_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           4    92   81.5   28.2   15.2   1.16
2.5.63-mm2cfq    3    92   80.4   27.7   16.3   1.16
2.5.63-mm2       3    92   80.4   29.3   16.3   1.15
2.5.63-mm2dl     3    92   80.4   28.3   16.3   1.16
ctar_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           3    99   79.8    1.0    4.0   1.25
2.5.63-mm2cfq    3   102   76.5    0.0    0.0   1.29
2.5.63-mm2       3   112   70.5    1.0    6.2   1.40
2.5.63-mm2dl     3   103   75.7    0.0    0.0   1.30
xtar_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           3   102   74.5    1.0    3.9   1.29
2.5.63-mm2cfq    3   106   71.7    1.0    3.8   1.34
2.5.63-mm2       3   108   70.4    1.0    4.6   1.35
2.5.63-mm2dl     3   105   72.4    1.0    3.8   1.33
io_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           5   217   35.0   56.7   15.1   2.75
2.5.63-mm2cfq    3   218   34.9   50.3   12.8   2.76
2.5.63-mm2       3    99   75.8   15.1    7.1   1.24
2.5.63-mm2dl     3   168   44.6   39.6   13.1   2.13
io_other:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           4    95   78.9   15.3    8.3   1.20
2.5.63-mm2cfq    3    93   80.6   14.7    7.5   1.18
2.5.63-mm2       3    92   80.4   13.2    6.5   1.15
2.5.63-mm2dl     3    96   78.1   15.3    7.3   1.22
read_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           3   106   74.5    5.7    4.7   1.34
2.5.63-mm2cfq    3   112   68.8    6.8    5.4   1.42
2.5.63-mm2       3   121   64.5    8.4    5.8   1.51
2.5.63-mm2dl     3   107   72.9    6.2    4.7   1.35
list_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           3    96   79.2    0.0    6.2   1.22
2.5.63-mm2cfq    3    97   79.4    0.0    6.2   1.23
2.5.63-mm2       3    99   76.8    0.0    6.1   1.24
2.5.63-mm2dl     3    98   78.6    0.0    6.1   1.24
mem_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           3   104   75.0   57.7    1.9   1.32
2.5.63-mm2cfq    3   101   76.2   52.3    2.0   1.28
2.5.63-mm2       3   132   59.1   90.3    2.3   1.65
2.5.63-mm2dl     3   100   79.0   52.0    2.0   1.27
dbench_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.63           4   194   39.2    2.0   38.7   2.46
2.5.63-mm2cfq    3   269   28.3    3.7   37.2   3.41
2.5.63-mm2       3   236   32.2    2.7   43.2   2.95
2.5.63-mm2dl     3   207   36.7    2.0   36.2   2.62
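For readers unfamiliar with contest's output, the Ratio column appears to be each kernel's loaded compile time divided by that same kernel's no_load compile time. That reading is inferred from the numbers above, not from contest documentation; a quick sketch:

```python
# How the Ratio column above appears to be computed: a kernel's compile
# time under load divided by that same kernel's no_load compile time.
# (Inferred from the numbers in this post, not from contest documentation.)
no_load = {"2.5.63": 79, "2.5.63-mm2": 80,
           "2.5.63-mm2cfq": 79, "2.5.63-mm2dl": 79}

def ratio(kernel, loaded_time):
    """Loaded compile time relative to the same kernel's no_load time."""
    return round(loaded_time / no_load[kernel], 2)

print(ratio("2.5.63", 217))        # io_load row for vanilla 2.5.63 -> 2.75
print(ratio("2.5.63-mm2dl", 168))  # io_load row for the DL kernel -> 2.13
```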
It seems the AS scheduler reliably takes slightly longer to compile the kernel
under no-load conditions, but only by about 1% CPU.
CFQ and DL were faster than AS at compiling the kernel while extracting or
creating tars.
AS was significantly faster when writing a large file to the same disk
(io_load) or to another disk (io_other). The CFQ and DL schedulers showed much
more variability on io_load during testing, but did not drop below 140 seconds.
CFQ and DL were faster at compiling the kernel under read_load, list_load and
dbench_load.
The mem_load result of AS being slower was just plain weird, with the result
rising from 100 to 150 seconds over the course of testing.
Con
* Re: [BENCHMARK] 2.5.63-mm2 + i/o schedulers with contest
From: Nick Piggin @ 2003-03-04 4:18 UTC (permalink / raw)
To: Con Kolivas; +Cc: linux-kernel, Andrew Morton
Con Kolivas wrote:
>Here are contest (http://contest.kolivas.org) benchmarks using the osdl
>hardware (http://www.osdl.org) for 2.5.63-mm2 and various i/o schedulers:
>
Thanks :)
>It seems the AS scheduler reliably takes slightly longer to compile the kernel
>in no load conditions, but only about 1% cpu.
>
It is likely that AS will wait too long for gcc to submit another
read and end up timing out anyway. Hopefully IO history tracking
will fix this up - for some loads the effect can be much worse.
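The waiting-then-timing-out behaviour Nick describes can be sketched as a toy decision function. This is an illustration of the anticipatory idea only; the names, structure and outcomes are made up, not the actual 2.5 AS code:

```python
from collections import namedtuple

# Toy model of anticipatory scheduling: after completing a read for some
# process, the scheduler may hold the disk idle for a short window in the
# hope that the same process submits another nearby read.
Request = namedtuple("Request", "pid is_read")

def after_read_completes(last_reader_pid, queued, arrives_during_window):
    """Decide what the scheduler gets out of anticipating.

    queued: requests already in the queue when the read completes.
    arrives_during_window: the request (or None) submitted before the
    anticipation timer expires."""
    for req in queued:
        if req.pid == last_reader_pid and req.is_read:
            return "dispatch dependent read"     # no need to wait at all
    nxt = arrives_during_window
    if nxt is not None and nxt.pid == last_reader_pid and nxt.is_read:
        return "anticipation paid off"           # avoided seeking away and back
    return "anticipation timed out"              # pure cost -- the gcc case

# gcc thinks between reads for longer than the window: the timeout case.
print(after_read_completes(42, queued=[], arrives_during_window=None))
```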
>
>
>CFQ and DL faster to compile the kernel than AS while extracting or creating
>tars.
>
This is likely due to balancing differences; from the LCPU% it does
seem like AS is doing a bit more "load" work.
>
>
>AS significantly faster under writing large file to the same disk (io_load) or
>other disk (io_other) conditions. The CFQ and DL schedulers showed much more
>variability on io_load during testing but did not drop below 140 seconds.
>
Small, randomish reads vs large writes _is_ where AS really can
perform better than a non-AS scheduler. Unfortunately gcc
doesn't have the _best_ IO pattern for AS ;)
>
>
>CFQ and DL scheduler were faster compiling the kernel under read_load,
>list_load and dbench_load.
>
>Mem_load result of AS being slower was just plain weird with the result rising
>from 100 to 150 during testing.
>
I would like to see if AS helps much with a swap/memory
thrashing load.
* Re: [BENCHMARK] 2.5.63-mm2 + i/o schedulers with contest
From: Con Kolivas @ 2003-03-04 5:15 UTC (permalink / raw)
To: Nick Piggin; +Cc: linux-kernel, Andrew Morton
On Tue, 4 Mar 2003 03:18 pm, Nick Piggin wrote:
> Con Kolivas wrote:
> >Here are contest (http://contest.kolivas.org) benchmarks using the osdl
> >hardware (http://www.osdl.org) for 2.5.63-mm2 and various i/o schedulers:
>
> Thanks :)
>
> >It seems the AS scheduler reliably takes slightly longer to compile the
> > kernel in no load conditions, but only about 1% cpu.
>
> It is likely that AS will wait too long for gcc to submit another
> read and end up timing out anyway. Hopefully IO history tracking
> will fix this up - for some loads the effect can be much worse.
>
> >CFQ and DL faster to compile the kernel than AS while extracting or
> > creating tars.
>
> This is likely due to balancing differences; from the LCPU% it does
> seem like AS is doing a bit more "load" work.
>
> >AS significantly faster under writing large file to the same disk
> > (io_load) or other disk (io_other) conditions. The CFQ and DL schedulers
> > showed much more variability on io_load during testing but did not drop
> > below 140 seconds.
>
> Small, randomish reads vs large writes _is_ where AS really can
> perform better than a non-AS scheduler. Unfortunately gcc
> doesn't have the _best_ IO pattern for AS ;)
Yes, I recall this discussion about a gcc-based benchmark. However, it is
interesting that AS still performed by far the best.
> >CFQ and DL scheduler were faster compiling the kernel under read_load,
> >list_load and dbench_load.
> >
> >Mem_load result of AS being slower was just plain weird with the result
> > rising from 100 to 150 during testing.
>
> I would like to see if AS helps much with a swap/memory
> thrashing load.
That's what mem_load is: it repeatedly tries to access 110% of available RAM.
To quote from the original post:
mem_load:
Kernel [runs] Time CPU% Loads LCPU% Ratio
2.5.63 3 104 75.0 57.7 1.9 1.32
2.5.63-mm2cfq 3 101 76.2 52.3 2.0 1.28
2.5.63-mm2 3 132 59.1 90.3 2.3 1.65
2.5.63-mm2dl 3 100 79.0 52.0 2.0 1.27
Note that mm2 with AS performed equivalently to the other schedulers at first,
but took longer on later runs (99, 148, 150). This is usually suggestive of a
memory leak, which contest is unusually sensitive at picking up, but there
wasn't anything suspicious in meminfo after these runs, and none of the other
loads changed over time. io_load usually shows drastic prolongation when
memory is leaking.
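The mem_load behaviour described above (repeatedly touching ~110% of available RAM so the kernel is forced to swap) can be sketched as a scaled-down toy. The real load is contest's own C program; the 4 KiB page size, the loop shape, and the function name here are illustrative assumptions, with only the 110% figure taken from the post:

```python
# Scaled-down sketch of a mem_load-style memory thrasher.
def mem_load(mem_bytes, overcommit=1.1, page=4096, passes=2):
    """Walk a buffer ~10% larger than 'mem_bytes', touching one byte per
    page so every page must be made resident in turn. With mem_bytes set
    to the machine's real RAM size this forces swapping; with a small
    value it is just a harmless demonstration."""
    size = int(mem_bytes * overcommit)
    buf = bytearray(size)
    touched = 0
    for _ in range(passes):
        for off in range(0, size, page):
            buf[off] ^= 1  # dirty one byte in each page
            touched += 1
    return touched

# Harmless, scaled-down run: pretend "available RAM" is 1 MiB.
print(mem_load(1 << 20))  # 564 page touches (282 pages x 2 passes)
```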
Con
* Re: [BENCHMARK] 2.5.63-mm2 + i/o schedulers with contest
From: Nick Piggin @ 2003-03-04 5:26 UTC (permalink / raw)
To: Con Kolivas; +Cc: linux-kernel, Andrew Morton
Con Kolivas wrote:
>On Tue, 4 Mar 2003 03:18 pm, Nick Piggin wrote:
>
>>Small, randomish reads vs large writes _is_ where AS really can
>>perform better than a non-AS scheduler. Unfortunately gcc
>>doesn't have the _best_ IO pattern for AS ;)
>>
>
>Yes I recall this discussion against a gcc based benchmark. However it is
>interesting that it still performed by far the best.
>
Yes, AS obviously does help gcc against io_load. My
"unfortunately" comment was just a pun, of course we
don't want to just test where AS does well.
>>>CFQ and DL scheduler were faster compiling the kernel under read_load,
>>>list_load and dbench_load.
>>>
>>>Mem_load result of AS being slower was just plain weird with the result
>>>rising from 100 to 150 during testing.
>>>
>>I would like to see if AS helps much with a swap/memory
>>thrashing load.
>>
>
>That's what mem_load is. It repeatedly tries to access 110% of available ram.
>quote from original post:
>mem_load:
>Kernel [runs] Time CPU% Loads LCPU% Ratio
>2.5.63 3 104 75.0 57.7 1.9 1.32
>2.5.63-mm2cfq 3 101 76.2 52.3 2.0 1.28
>2.5.63-mm2 3 132 59.1 90.3 2.3 1.65
>2.5.63-mm2dl 3 100 79.0 52.0 2.0 1.27
>
>Note that mm2 with AS performed equivalent to the other schedulers but on
>later runs took longer. (99, 148,150) This is usually suspicious of a memory
>leak that contest is unusually sensitive at picking up, but there wasn't
>anything suspicious about the meminfo after these runs, and none of the other
>loads changed over time. io_load usually shows drastic prolongation when
>memory is leaking.
>
Ah, OK. And this change didn't affect the other schedulers on mm2? Is
it reproducible with AS? I'll have to keep this in mind and take
another look at it after a few other bugs are fixed.
* Re: [BENCHMARK] 2.5.63-mm2 + i/o schedulers with contest
From: Con Kolivas @ 2003-03-04 5:29 UTC (permalink / raw)
To: Nick Piggin; +Cc: linux-kernel, Andrew Morton
On Tue, 4 Mar 2003 04:26 pm, Nick Piggin wrote:
> Con Kolivas wrote:
> >On Tue, 4 Mar 2003 03:18 pm, Nick Piggin wrote:
> >>Small, randomish reads vs large writes _is_ where AS really can
> >>perform better than a non-AS scheduler. Unfortunately gcc
> >>doesn't have the _best_ IO pattern for AS ;)
> >
> >Yes I recall this discussion against a gcc based benchmark. However it is
> >interesting that it still performed by far the best.
>
> Yes, AS obviously does help gcc against io_load. My
> "unfortunately" comment was just a pun, of course we
> don't want to just test where AS does well.
>
> >>>CFQ and DL scheduler were faster compiling the kernel under read_load,
> >>>list_load and dbench_load.
> >>>
> >>>Mem_load result of AS being slower was just plain weird with the result
> >>>rising from 100 to 150 during testing.
> >>
> >>I would like to see if AS helps much with a swap/memory
> >>thrashing load.
> >
> >That's what mem_load is. It repeatedly tries to access 110% of available
> > ram. quote from original post:
> >mem_load:
> >Kernel [runs] Time CPU% Loads LCPU% Ratio
> >2.5.63 3 104 75.0 57.7 1.9 1.32
> >2.5.63-mm2cfq 3 101 76.2 52.3 2.0 1.28
> >2.5.63-mm2 3 132 59.1 90.3 2.3 1.65
> >2.5.63-mm2dl 3 100 79.0 52.0 2.0 1.27
> >
> >Note that mm2 with AS performed equivalent to the other schedulers but on
> >later runs took longer. (99, 148,150) This is usually suspicious of a
> > memory leak that contest is unusually sensitive at picking up, but there
> > wasn't anything suspicious about the meminfo after these runs, and none
> > of the other loads changed over time. io_load usually shows drastic
> > prolongation when memory is leaking.
>
> Ah, OK. And this change didn't affect the other schedulers on mm2? Is
> it reproducible with AS? I'll have to keep this in mind and take
> another look at it after a few other bugs are fixed.
Not on the other schedulers, no. I'll throw some more benchmarks at it to see
if it recurs. I didn't think much of it at the time.
Con
* Re: [BENCHMARK] 2.5.63-mm2 + i/o schedulers with contest
From: Andrew Morton @ 2003-03-04 8:10 UTC (permalink / raw)
To: Con Kolivas; +Cc: linux-kernel
Con Kolivas <kernel@kolivas.org> wrote:
>
> Mem_load result of AS being slower was just plain weird with the result rising
> from 100 to 150 during testing.
>
Maybe we should just swap computers or something?
Finished compiling kernel: elapsed: 145 user: 180 system: 18
Finished mem_load: elapsed: 146 user: 0 system: 2 loads: 5000
Finished compiling kernel: elapsed: 135 user: 181 system: 17
Finished mem_load: elapsed: 136 user: 0 system: 2 loads: 4800
Finished compiling kernel: elapsed: 129 user: 181 system: 17
Finished mem_load: elapsed: 130 user: 0 system: 2 loads: 4800
256MB, dual CPU, ext3/IDE.
Whereas 2.5.63+bk gives:
Finished compiling kernel: elapsed: 131 user: 182 system: 17
Finished mem_load: elapsed: 131 user: 0 system: 1 loads: 4900
Finished compiling kernel: elapsed: 135 user: 182 system: 17
Finished mem_load: elapsed: 135 user: 0 system: 1 loads: 4800
Finished compiling kernel: elapsed: 129 user: 182 system: 17
Finished mem_load: elapsed: 129 user: 0 system: 1 loads: 4600
Conceivably swap fragmentation, but unlikely. Is it still doing a swapoff
between runs?
* Re: [BENCHMARK] 2.5.63-mm2 + i/o schedulers with contest
From: Con Kolivas @ 2003-03-04 8:20 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel
On Tue, 4 Mar 2003 07:10 pm, Andrew Morton wrote:
> Con Kolivas <kernel@kolivas.org> wrote:
> > Mem_load result of AS being slower was just plain weird with the result
> > rising from 100 to 150 during testing.
>
> Maybe we should just swap computers or something?
Well, bugger me. All I can do is report the results I get, and I filter them a
_lot_. I don't want to lead you up the garden path any more than you want to
travel it.
>
> Finished compiling kernel: elapsed: 145 user: 180 system: 18
> Finished mem_load: elapsed: 146 user: 0 system: 2 loads: 5000
> Finished compiling kernel: elapsed: 135 user: 181 system: 17
> Finished mem_load: elapsed: 136 user: 0 system: 2 loads: 4800
> Finished compiling kernel: elapsed: 129 user: 181 system: 17
> Finished mem_load: elapsed: 130 user: 0 system: 2 loads: 4800
>
> 256MB, dual CPU, ext3/IDE.
>
> Whereas 2.5.63+bk gives:
>
> Finished compiling kernel: elapsed: 131 user: 182 system: 17
> Finished mem_load: elapsed: 131 user: 0 system: 1 loads: 4900
> Finished compiling kernel: elapsed: 135 user: 182 system: 17
> Finished mem_load: elapsed: 135 user: 0 system: 1 loads: 4800
> Finished compiling kernel: elapsed: 129 user: 182 system: 17
> Finished mem_load: elapsed: 129 user: 0 system: 1 loads: 4600
>
> Conceivably swap fragmentation, but unlikely. Is it still doing a swapoff
> between runs?
Yes it should be doing it.
Con
* Re: [BENCHMARK] 2.5.63-mm2 + i/o schedulers with contest
From: Con Kolivas @ 2003-03-05 6:02 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel
On Tue, 4 Mar 2003 07:10 pm, Andrew Morton wrote:
> Con Kolivas <kernel@kolivas.org> wrote:
> > Mem_load result of AS being slower was just plain weird with the result
> > rising from 100 to 150 during testing.
>
> Maybe we should just swap computers or something?
>
> Finished compiling kernel: elapsed: 145 user: 180 system: 18
> Finished mem_load: elapsed: 146 user: 0 system: 2 loads: 5000
> Finished compiling kernel: elapsed: 135 user: 181 system: 17
> Finished mem_load: elapsed: 136 user: 0 system: 2 loads: 4800
> Finished compiling kernel: elapsed: 129 user: 181 system: 17
> Finished mem_load: elapsed: 130 user: 0 system: 2 loads: 4800
>
> 256MB, dual CPU, ext3/IDE.
Tried again - these were done as part of a full contest run, not just mem_load
by itself, but these were the mem_load results:
98
128
135
then it oopsed (the one I posted earlier) and wasn't really usable after that
point. Perhaps they're related. The mystery remains. I'll see what happens
with the next mm release.
Con