* [BENCHMARK] 2.5.43-mm2 with contest
From: Con Kolivas @ 2002-10-17 7:40 UTC (permalink / raw)
To: linux kernel mailing list; +Cc: Andrew Morton
Here are the updated benchmarks with contest v0.51 (http://contest.kolivas.net)
showing the change from -mm1 to -mm2. Other results removed for clarity.
noload:
Kernel [runs]      Time    CPU%  Loads  LCPU%  Ratio
2.4.18 [3]         71.8    93    0      0      1.01
2.5.43 [2]         74.6    92    0      0      1.04
2.5.43-mm1 [4]     74.9    93    0      0      1.05
2.5.43-mm2 [2]     73.4    93    0      0      1.03
Interesting - this change was significant. The slow start that occurs with
noload after a memory flush seems to have been tamed somewhat.
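As a side note on reading these tables: contest's Ratio is the kernel compile Time relative to a reference run. I'm not certain exactly which reference these numbers use, but assuming (hypothetically) the 2.4.18 cache-cold noload time of 71.8 s as the baseline, the arithmetic would look like this sketch:

```python
# Hedged sketch of the Ratio column: Time divided by a reference time.
# The 71.8 s baseline is an ASSUMPTION taken from the 2.4.18 row above;
# contest's actual reference run may differ slightly, which would explain
# small rounding differences against the table.
REFERENCE_TIME = 71.8  # seconds, assumed cache-cold baseline

def contest_ratio(time_s: float, ref: float = REFERENCE_TIME) -> float:
    """Return a contest-style Ratio rounded to two decimal places."""
    return round(time_s / ref, 2)

print(contest_ratio(74.6))  # 2.5.43 noload -> 1.04, matching the table
```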
process_load:
Kernel [runs]      Time    CPU%  Loads  LCPU%  Ratio
2.5.43 [2]         99.7    71    44     31     1.40
2.5.43-mm1 [5]     100.4   73    37     28     1.41
2.5.43-mm2 [2]     105.8   71    44     31     1.48
One pathological run was removed from -mm1 and three from -mm2. I don't know
why it is getting stuck during process_load now; the other 2.5 kernels only do
this at larger data sizes for process_load, and 2.4 doesn't seem to exhibit it
at all.
ctar_load:
Kernel [runs]      Time    CPU%  Loads  LCPU%  Ratio
2.5.43 [1]         97.6    79    1      7      1.37
2.5.43-mm1 [3]     94.6    81    1      6      1.32
2.5.43-mm2 [1]     92.3    82    1      5      1.29
xtar_load:
Kernel [runs]      Time    CPU%  Loads  LCPU%  Ratio
2.5.43 [1]         114.9   67    1      7      1.61
2.5.43-mm1 [3]     221.2   46    3      7      3.10
2.5.43-mm2 [2]     171.0   45    2      8      2.39
Improvement
io_load:
Kernel [runs]      Time    CPU%  Loads  LCPU%  Ratio
2.5.43 [1]         578.9   13    45     12     8.11
2.5.43-mm1 [3]     383.0   21    27     11     5.36
2.5.43-mm2 [2]     301.1   26    21     11     4.22
Improvement
read_load:
Kernel [runs]      Time    CPU%  Loads  LCPU%  Ratio
2.5.43 [3]         117.3   64    6      3      1.64
2.5.43-mm1 [3]     104.4   74    7      4      1.46
2.5.43-mm2 [1]     105.7   73    6      4      1.48
list_load:
Kernel [runs]      Time    CPU%  Loads  LCPU%  Ratio
2.5.43 [2]         93.0    76    1      18     1.30
2.5.43-mm1 [3]     97.3    73    0      19     1.36
2.5.43-mm2 [1]     98.9    72    1      23     1.39
mem_load:
Kernel [runs]      Time    CPU%  Loads  LCPU%  Ratio
2.5.43 [1]         102.0   75    28     2      1.43
2.5.43-mm1 [3]     104.4   71    27     2      1.46
2.5.43-mm2 [2]     106.5   69    27     2      1.49
Removal of the per-cpu pages patch does not seem to have been detrimental to
the contest benchmarks, at least - perhaps it is responsible for noload being
better now?
Con
* Re: [BENCHMARK] 2.5.43-mm2 with contest
From: Andrew Morton @ 2002-10-17 8:01 UTC (permalink / raw)
To: Con Kolivas; +Cc: linux kernel mailing list
Con Kolivas wrote:
>
> Here are the updated benchmarks with contest v0.51 (http://contest.kolivas.net)
> showing the change from -mm1 to -mm2. Other results removed for clarity.
>
> noload:
> Kernel [runs]      Time    CPU%  Loads  LCPU%  Ratio
> 2.4.18 [3]         71.8    93    0      0      1.01
> 2.5.43 [2]         74.6    92    0      0      1.04
> 2.5.43-mm1 [4]     74.9    93    0      0      1.05
> 2.5.43-mm2 [2]     73.4    93    0      0      1.03
Would be interesting to run
blockdev --setra 1024 /dev/hdXX
here. We're getting more idle time with 2.5 and that can only
be due to disk wait - the IO scheduler changes. This might make a
small difference.
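For reference, a minimal sketch of checking and raising the readahead setting
(the `/dev/hdXX` device name is the same placeholder as above - substitute the
disk actually under test; `--setra` takes 512-byte sectors, so 1024 sectors is
512 KB):

```shell
# Placeholder device - replace with the disk under test (e.g. /dev/hda).
DEV=/dev/hdXX

# Show the current readahead setting, in 512-byte sectors.
blockdev --getra "$DEV"

# Raise readahead to 1024 sectors (1024 * 512 bytes = 512 KB).
blockdev --setra 1024 "$DEV"
```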
> ...
> Removal of per-cpu pages patch does not seem to have been detrimental to contest
> benchmarks at least - perhaps this is responsible for the noload being better now?
Well that code is still there. I'd expect a very small benefit from it
in this testing.
* Re: [BENCHMARK] 2.5.43-mm2 with contest
From: Con Kolivas @ 2002-10-17 10:56 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux kernel mailing list
On Thursday 17 Oct 2002 6:01 pm, Andrew Morton wrote:
> Con Kolivas wrote:
> > Here are the updated benchmarks with contest v0.51
> > (http://contest.kolivas.net) showing the change from -mm1 to -mm2. Other
> > results removed for clarity.
> >
> > noload:
> > Kernel [runs]      Time    CPU%  Loads  LCPU%  Ratio
> > 2.4.18 [3]         71.8    93    0      0      1.01
> > 2.5.43 [2]         74.6    92    0      0      1.04
> > 2.5.43-mm1 [4]     74.9    93    0      0      1.05
> > 2.5.43-mm2 [2]     73.4    93    0      0      1.03
>
> Would be interesting to run
>
> blockdev --setra 1024 /dev/hdXX
>
> here. We're getting more idle time with 2.5 and that can only
> be due to disk wait - the IO scheduler changes. This might make a
> small difference.
Well that isn't it (b is with ra 1024):
2.5.43-mm2 [2]     73.4    93    0      0      1.03
2.5.43-mm2b [3]    76.4    94    0      0      1.07
>
> > ...
> > Removal of per-cpu pages patch does not seem to have been detrimental to
> > contest benchmarks at least - perhaps this is responsible for the noload
> > being better now?
>
> Well that code is still there. I'd expect a very small benefit from it
> in this testing.
Sorry - I misunderstood your announcement message.