* [BENCHMARK] 2.5.60-cfq with contest
From: Con Kolivas @ 2003-02-11 10:55 UTC
To: linux kernel mailing list; +Cc: Jens Axboe
Here's a quick set of the relevant results with Jens's untuned CFQ:
ctar_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    99   78.8    1.0    4.0   1.25
2.5.60-cfq         1   101   78.2    1.0    4.0   1.28

xtar_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2   101   76.2    1.0    5.0   1.28
2.5.60-cfq         1   115   66.1    1.0    4.3   1.46

io_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2   139   54.7   29.0   12.1   1.76
2.5.60-cfq         1   606   12.7  212.0   21.6   7.67

io_other:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    90   83.3   10.8    5.5   1.14
2.5.60-cfq         1    89   84.3   10.8    5.6   1.13

read_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2   103   74.8    6.2    6.8   1.30
2.5.60-cfq         1   103   76.7    6.9    5.8   1.30

list_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    95   80.0    0.0    6.3   1.20
2.5.60-cfq         1    95   81.1    0.0    6.3   1.20
Write-based loads hurt. No breakages, but it needs tuning.
Con
* Re: [BENCHMARK] 2.5.60-cfq with contest
From: Jens Axboe @ 2003-02-11 10:59 UTC
To: Con Kolivas; +Cc: linux kernel mailing list
On Tue, Feb 11 2003, Con Kolivas wrote:
> Here's a quick set of the relevant results with Jens's untuned CFQ:
Thanks!
> ctar_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    99   78.8    1.0    4.0   1.25
> 2.5.60-cfq         1   101   78.2    1.0    4.0   1.28
>
> xtar_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2   101   76.2    1.0    5.0   1.28
> 2.5.60-cfq         1   115   66.1    1.0    4.3   1.46
>
> io_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2   139   54.7   29.0   12.1   1.76
> 2.5.60-cfq         1   606   12.7  212.0   21.6   7.67
>
> io_other:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    90   83.3   10.8    5.5   1.14
> 2.5.60-cfq         1    89   84.3   10.8    5.6   1.13
>
> read_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2   103   74.8    6.2    6.8   1.30
> 2.5.60-cfq         1   103   76.7    6.9    5.8   1.30
>
> list_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    95   80.0    0.0    6.3   1.20
> 2.5.60-cfq         1    95   81.1    0.0    6.3   1.20
>
> Write-based loads hurt. No breakages, but it needs tuning.
That's not even as bad as I had feared. I'll try to do some tuning with
contest locally.
--
Jens Axboe
* Re: [BENCHMARK] 2.5.60-cfq with contest
2003-02-11 10:59 ` Jens Axboe
@ 2003-02-11 13:37 ` Jens Axboe
2003-02-11 14:49 ` Jens Axboe
0 siblings, 1 reply; 5+ messages in thread
From: Jens Axboe @ 2003-02-11 13:37 UTC (permalink / raw)
To: Con Kolivas; +Cc: linux kernel mailing list
On Tue, Feb 11 2003, Jens Axboe wrote:
> > Write-based loads hurt. No breakages, but it needs tuning.
>
> That's not even as bad as I had feared. I'll try to do some tuning with
> contest locally.
Here are my results for 2.5.60 vanilla, 2.5.60 + cfq with the default
quantum of 16 (what you tested, too), and 2.5.60 + cfq without the
quantum setting. The latter should be the fairest, since it only moves
one request at a time from the pending queues.
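To make the quantum knob concrete before the numbers: a minimal,
hypothetical sketch of the idea in C. This is illustrative only, not the
actual CFQ patch; the structures and dispatch_quantum() are made up.

struct request {
        struct request *next;
};

struct pending_queue {                  /* one per process */
        struct request *head;           /* FIFO of that process's requests */
};

struct dispatch_list {                  /* shared list the drive services */
        struct request *head;
        struct request **tail;          /* initialised to &head */
};

/*
 * Move at most 'quantum' requests from one process's pending queue to
 * the dispatch list.  Calling this once per process per round with a
 * quantum of one is the fairest arrangement (the "cfq0" kernels above,
 * which move a single request at a time); larger quantums batch one
 * process's requests together for throughput.
 */
static int dispatch_quantum(struct pending_queue *pq,
                            struct dispatch_list *dl, int quantum)
{
        int moved = 0;

        while (pq->head && moved < quantum) {
                struct request *rq = pq->head;

                pq->head = rq->next;    /* unlink from the pending FIFO */
                rq->next = NULL;
                *dl->tail = rq;         /* append to the dispatch list */
                dl->tail = &rq->next;
                moved++;
        }
        return moved;
}

With a quantum of 16, one heavy writer can put 16 requests on the
dispatch list ahead of everyone else on each pass, which would be
consistent with the io_load numbers below.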
no_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    31  177.4      0    0.0   1.00
2.5.60-cfq0        2    31  174.2      0    0.0   1.00
2.5.60-cfq16       2    31  177.4      0    0.0   1.00

cacherun:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    29  182.8      0    0.0   0.94
2.5.60-cfq0        2    28  192.9      0    0.0   0.90
2.5.60-cfq16       2    29  182.8      0    0.0   0.94

process_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    38  142.1     12   47.4   1.23
2.5.60-cfq0        2    41  129.3     16   61.0   1.32
2.5.60-cfq16       2    37  145.9     12   43.2   1.19

ctar_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    38  147.4      0    0.0   1.23
2.5.60-cfq0        2    36  155.6      0    0.0   1.16
2.5.60-cfq16       2    36  155.6      0    0.0   1.16

xtar_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    40  140.0      0    2.5   1.29
2.5.60-cfq0        2    37  148.6      0    2.7   1.19
2.5.60-cfq16       2    40  137.5      0    2.5   1.29

io_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    93   61.3      2   14.0   3.00
2.5.60-cfq0        4   103   54.4      2   12.6   3.32
2.5.60-cfq16       2   264   21.6     12   19.9   8.52

read_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    40  140.0      0    5.0   1.29
2.5.60-cfq0        2    39  143.6      0    5.1   1.26
2.5.60-cfq16       2    40  140.0      0    5.0   1.29

list_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    35  157.1      0    8.6   1.13
2.5.60-cfq0        2    35  160.0      0    8.6   1.13
2.5.60-cfq16       2    35  160.0      0   14.3   1.13

mem_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    50  116.0     75   10.0   1.61
2.5.60-cfq0        2    57  101.8     78    8.8   1.84
2.5.60-cfq16       2    60   96.7     80    8.2   1.94

dbench_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    36  155.6  12693   27.8   1.16
2.5.60-cfq0        1    35  157.1  12013   28.6   1.13
2.5.60-cfq16       2    37  151.4  14356   32.4   1.19
--
Jens Axboe
* Re: [BENCHMARK] 2.5.60-cfq with contest
From: Jens Axboe @ 2003-02-11 14:49 UTC
To: Con Kolivas; +Cc: linux kernel mailing list
On Tue, Feb 11 2003, Jens Axboe wrote:
> On Tue, Feb 11 2003, Jens Axboe wrote:
> > > Write-based loads hurt. No breakages, but it needs tuning.
> >
> > That's not even as bad as I had feared. I'll try to do some tuning with
> > contest locally.
>
> Here are my results for 2.5.60 vanilla, 2.5.60 + cfq with the default
> quantum of 16 (what you tested, too), and 2.5.60 + cfq without the
> quantum setting. The latter should be the fairest, since it only moves
> one request at a time from the pending queues.
Did runs with quantum values of 2, 4, and 8 as well to see how they
look. The dbench runs often got screwed up; perhaps the signalling
changes from 2.5.60 are interfering?
dbench_load.c:72: SYSTEM ERROR: No such process : could not kill pid 4842
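The shape of that failure, as a guess (a hypothetical reconstruction;
the real contest source isn't quoted here, so stop_load() and the
message format are made up to match the error above): kill() fails with
ESRCH once the pid is already gone.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/*
 * Hypothetical reconstruction, not the actual contest source: kill()
 * returns -1 with errno == ESRCH ("No such process") when the load's
 * pid has already exited -- which the 2.5.60 signalling changes could
 * plausibly provoke.
 */
static void stop_load(pid_t pid)
{
        if (kill(pid, SIGKILL) == -1)
                fprintf(stderr, "SYSTEM ERROR: %s : could not kill pid %d\n",
                        strerror(errno), (int)pid);
}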
Anyway, here are the results:
no_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    31  177.4      0    0.0   1.00
2.5.60-cfq0        2    31  174.2      0    0.0   1.00
2.5.60-cfq16       2    31  177.4      0    0.0   1.00
2.5.60-cfq4        1    32  171.9      0    0.0   1.00
2.5.60-cfq8        2    31  174.2      0    0.0   1.00

cacherun:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    29  182.8      0    0.0   0.94
2.5.60-cfq0        2    28  192.9      0    0.0   0.90
2.5.60-cfq16       2    29  182.8      0    0.0   0.94
2.5.60-cfq4        1    29  186.2      0    0.0   0.91
2.5.60-cfq8        2    29  182.8      0    0.0   0.94

process_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    38  142.1     12   47.4   1.23
2.5.60-cfq0        2    41  129.3     16   61.0   1.32
2.5.60-cfq16       2    37  145.9     12   43.2   1.19
2.5.60-cfq4        1    36  150.0     11   44.4   1.12
2.5.60-cfq8        2    38  142.1     13   47.4   1.23

ctar_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    38  147.4      0    0.0   1.23
2.5.60-cfq0        2    36  155.6      0    0.0   1.16
2.5.60-cfq16       2    36  155.6      0    0.0   1.16
2.5.60-cfq4        1    36  155.6      0    0.0   1.12
2.5.60-cfq8        2    37  151.4      0    0.0   1.19

xtar_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    40  140.0      0    2.5   1.29
2.5.60-cfq0        2    37  148.6      0    2.7   1.19
2.5.60-cfq16       2    40  137.5      0    2.5   1.29
2.5.60-cfq4        1    37  148.6      0    2.7   1.16
2.5.60-cfq8        2    38  147.4      0    2.6   1.23

io_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    93   61.3      2   14.0   3.00
2.5.60-cfq0        4   103   54.4      2   12.6   3.32
2.5.60-cfq16       2   264   21.6     12   19.9   8.52
2.5.60-cfq4        1    97   57.7      3   15.5   3.03
2.5.60-cfq8        2   135   42.2      5   16.3   4.35

read_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    40  140.0      0    5.0   1.29
2.5.60-cfq0        2    39  143.6      0    5.1   1.26
2.5.60-cfq16       2    40  140.0      0    5.0   1.29
2.5.60-cfq4        1    39  143.6      0    5.1   1.22
2.5.60-cfq8        2    40  140.0      0    5.0   1.29

list_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    35  157.1      0    8.6   1.13
2.5.60-cfq0        2    35  160.0      0    8.6   1.13
2.5.60-cfq16       2    35  160.0      0   14.3   1.13
2.5.60-cfq4        1    36  155.6      0    8.3   1.12
2.5.60-cfq8        2    35  160.0      0   11.4   1.13

mem_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    50  116.0     75   10.0   1.61
2.5.60-cfq0        2    57  101.8     78    8.8   1.84
2.5.60-cfq16       2    60   96.7     80    8.2   1.94
2.5.60-cfq4        1    52  111.5     76    9.4   1.62
2.5.60-cfq8        2    50  114.0     75    9.8   1.61

dbench_load:
Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.60             2    36  155.6  12693   27.8   1.16
2.5.60-cfq0        1    35  157.1  12013   28.6   1.13
2.5.60-cfq16       2    37  151.4  14356   32.4   1.19
2.5.60-cfq8        1    35  157.1  12174   31.4   1.13
As I initially expected, without having had data to back it up, a
non-zero quantum value helps. 16 was too much, though; 4 looks like a
good choice, at least here.
--
Jens Axboe
* Re: [BENCHMARK] 2.5.60-cfq with contest
From: Con Kolivas @ 2003-02-12 10:47 UTC
To: Jens Axboe; +Cc: linux kernel mailing list
--- Jens Axboe <axboe@suse.de> wrote:
> On Tue, Feb 11 2003, Jens Axboe wrote:
> > On Tue, Feb 11 2003, Jens Axboe wrote:
> > > > Write-based loads hurt. No breakages, but it needs tuning.
> > >
> > > That's not even as bad as I had feared. I'll try to do some tuning
> > > with contest locally.
> >
> > Here are my results for 2.5.60 vanilla, 2.5.60 + cfq with the default
> > quantum of 16 (what you tested, too), and 2.5.60 + cfq without the
> > quantum setting. The latter should be the fairest, since it only moves
> > one request at a time from the pending queues.
>
> Did runs with quantum values of 2, 4, and 8 as well to see how they
> look. The dbench runs often got screwed up; perhaps the signalling
> changes from 2.5.60 are interfering?
>
> dbench_load.c:72: SYSTEM ERROR: No such process : could not kill pid 4842
>
> Anyway, here are the results:
>
> no_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    31  177.4      0    0.0   1.00
> 2.5.60-cfq0        2    31  174.2      0    0.0   1.00
> 2.5.60-cfq16       2    31  177.4      0    0.0   1.00
> 2.5.60-cfq4        1    32  171.9      0    0.0   1.00
> 2.5.60-cfq8        2    31  174.2      0    0.0   1.00
>
> cacherun:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    29  182.8      0    0.0   0.94
> 2.5.60-cfq0        2    28  192.9      0    0.0   0.90
> 2.5.60-cfq16       2    29  182.8      0    0.0   0.94
> 2.5.60-cfq4        1    29  186.2      0    0.0   0.91
> 2.5.60-cfq8        2    29  182.8      0    0.0   0.94
>
> process_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    38  142.1     12   47.4   1.23
> 2.5.60-cfq0        2    41  129.3     16   61.0   1.32
> 2.5.60-cfq16       2    37  145.9     12   43.2   1.19
> 2.5.60-cfq4        1    36  150.0     11   44.4   1.12
> 2.5.60-cfq8        2    38  142.1     13   47.4   1.23
>
> ctar_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    38  147.4      0    0.0   1.23
> 2.5.60-cfq0        2    36  155.6      0    0.0   1.16
> 2.5.60-cfq16       2    36  155.6      0    0.0   1.16
> 2.5.60-cfq4        1    36  155.6      0    0.0   1.12
> 2.5.60-cfq8        2    37  151.4      0    0.0   1.19
>
> xtar_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    40  140.0      0    2.5   1.29
> 2.5.60-cfq0        2    37  148.6      0    2.7   1.19
> 2.5.60-cfq16       2    40  137.5      0    2.5   1.29
> 2.5.60-cfq4        1    37  148.6      0    2.7   1.16
> 2.5.60-cfq8        2    38  147.4      0    2.6   1.23
>
> io_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    93   61.3      2   14.0   3.00
> 2.5.60-cfq0        4   103   54.4      2   12.6   3.32
> 2.5.60-cfq16       2   264   21.6     12   19.9   8.52
> 2.5.60-cfq4        1    97   57.7      3   15.5   3.03
> 2.5.60-cfq8        2   135   42.2      5   16.3   4.35
>
> read_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    40  140.0      0    5.0   1.29
> 2.5.60-cfq0        2    39  143.6      0    5.1   1.26
> 2.5.60-cfq16       2    40  140.0      0    5.0   1.29
> 2.5.60-cfq4        1    39  143.6      0    5.1   1.22
> 2.5.60-cfq8        2    40  140.0      0    5.0   1.29
>
> list_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    35  157.1      0    8.6   1.13
> 2.5.60-cfq0        2    35  160.0      0    8.6   1.13
> 2.5.60-cfq16       2    35  160.0      0   14.3   1.13
> 2.5.60-cfq4        1    36  155.6      0    8.3   1.12
> 2.5.60-cfq8        2    35  160.0      0   11.4   1.13
>
> mem_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    50  116.0     75   10.0   1.61
> 2.5.60-cfq0        2    57  101.8     78    8.8   1.84
> 2.5.60-cfq16       2    60   96.7     80    8.2   1.94
> 2.5.60-cfq4        1    52  111.5     76    9.4   1.62
> 2.5.60-cfq8        2    50  114.0     75    9.8   1.61
>
> dbench_load:
> Kernel        [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.60             2    36  155.6  12693   27.8   1.16
> 2.5.60-cfq0        1    35  157.1  12013   28.6   1.13
> 2.5.60-cfq16       2    37  151.4  14356   32.4   1.19
> 2.5.60-cfq8        1    35  157.1  12174   31.4   1.13
>
> As I initially expected, without having had data to back it up, a
> non-zero quantum value helps. 16 was too much, though; 4 looks like a
> good choice, at least here.
They look pretty consistent with my results (does any kernel hacker have a
uniprocessor machine!?).

Impressive that changing a single setting, without any other tuning, can
almost match the performance of the "tuned up" current scheduler.
You can choose the loads when running contest by just appending the load
names:

contest no_load io_load xtar_load ctar_load

They all need at least one no_load run to compare against.
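(Inferring from the numbers above: the Ratio column appears to be the
loaded compile time divided by the same kernel's no_load time, e.g.
93/31 = 3.00 for 2.5.60 under io_load and 264/31 = 8.52 for cfq16.)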
The dbench issue has been fixed by the signal patch from Linus.
Con