* [BENCHMARK] 2.5.59-mm8 with contest
From: Con Kolivas @ 2003-02-05 11:21 UTC (permalink / raw)
To: linux kernel mailing list; +Cc: Andrew Morton
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Here are contest benchmarks using OSDL hardware. More resolution has been
added to the io loads and read load (thanks Aggelos).
no_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3    79   94.9    0.0    0.0   1.00
2.5.59-mm7       5    78   96.2    0.0    0.0   1.00
2.5.59-mm8       3    79   93.7    0.0    0.0   1.00
cacherun:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3    76   98.7    0.0    0.0   0.96
2.5.59-mm7       5    75   98.7    0.0    0.0   0.96
2.5.59-mm8       3    76   97.4    0.0    0.0   0.96
process_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3    92   81.5   28.3   16.3   1.16
2.5.59-mm7       3    95   77.9   33.7   18.9   1.22
2.5.59-mm8       3   195   37.9  205.3   60.5   2.47
The scheduler changes seem to have altered the balance of what work is done
under this process load. No CPU cycles are wasted, so it is not necessarily a
bad result.
ctar_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3    98   80.6    2.0    5.1   1.24
2.5.59-mm7       5    96   80.2    1.4    3.4   1.23
2.5.59-mm8       3    99   78.8    2.0    5.1   1.25
xtar_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3   101   75.2    1.0    4.0   1.28
2.5.59-mm7       5    96   79.2    0.8    3.3   1.23
2.5.59-mm8       3   100   77.0    1.0    4.0   1.27
io_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3   154   48.7   32.6   12.3   1.95
2.5.59-mm7       3   112   67.0   15.9    7.1   1.44
2.5.59-mm8       3   152   50.0   35.4   13.1   1.92
This seems to be creeping up to the same as 2.5.59
read_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3   101   77.2    6.3    5.0   1.28
2.5.59-mm7       3    94   80.9    2.8    2.1   1.21
2.5.59-mm8       3    93   81.7    2.8    2.2   1.18
list_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3    95   80.0    0.0    6.3   1.20
2.5.59-mm7       4    94   80.9    0.0    6.4   1.21
2.5.59-mm8       3    98   78.6    0.0    6.1   1.24
mem_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3    97   80.4   56.7    2.1   1.23
2.5.59-mm7       4    92   82.6   45.5    1.4   1.18
2.5.59-mm8       3    97   80.4   53.3    2.1   1.23
dbench_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3   126   60.3    3.3   22.2   1.59
2.5.59-mm7       4   121   62.0    2.8   24.8   1.55
2.5.59-mm8       3   212   35.8   11.0   47.2   2.68
and this seems to be taking significantly longer
io_other:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.59           3    89   84.3   11.2    5.4   1.13
2.5.59-mm7       3    92   81.5   12.6    6.5   1.18
2.5.59-mm8       3   115   67.8   35.2   18.3   1.46
And this load which normally changes little has significantly different
results.
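[For reference, contest's Ratio column appears to be each kernel's elapsed
time under the given load divided by its own no_load time. A minimal sketch
recomputing a few rows from the tables above; this is my interpretation of
the reporting, not taken from contest's actual source:]

```python
# Sketch: recompute contest's Ratio column as load time / no_load time.
# The formula here is inferred from the tables above, not from contest's
# source code.

def contest_ratio(load_time, noload_time):
    """Ratio of elapsed compile time under load to the no_load baseline."""
    return round(load_time / noload_time, 2)

# A few rows from the tables above (no_load time is 79s for 2.5.59 and
# 2.5.59-mm8, and 78s for 2.5.59-mm7):
print(contest_ratio(92, 79))    # 2.5.59 process_load     -> 1.16
print(contest_ratio(195, 79))   # 2.5.59-mm8 process_load -> 2.47
print(contest_ratio(212, 79))   # 2.5.59-mm8 dbench_load  -> 2.68
```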
Con
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.0 (GNU/Linux)
iD8DBQE+QPPNF6dfvkL3i1gRAjh9AJ0VrUQBD9SbKX8jQNOtnYlwv0Ud2QCfdU+Q
k6hvNs0RWwIBc4PLSrc5eSo=
=ujgV
-----END PGP SIGNATURE-----
* Re: [BENCHMARK] 2.5.59-mm8 with contest
From: Andrew Morton @ 2003-02-05 20:37 UTC (permalink / raw)
To: Con Kolivas; +Cc: linux kernel mailing list
Con Kolivas wrote:
>
> ..
>
> This seems to be creeping up to the same as 2.5.59
> ...
> and this seems to be taking significantly longer
> ...
> And this load which normally changes little has significantly different
> results.
>
There were no I/O scheduler changes between -mm7 and -mm8. I
demand a recount!
Makefile | 32
arch/alpha/kernel/time.c | 12
arch/arm/kernel/time.c | 8
arch/i386/Kconfig | 18
arch/i386/kernel/Makefile | 3
arch/i386/kernel/apm.c | 16
arch/i386/kernel/io_apic.c | 2
arch/i386/kernel/time.c | 14
arch/i386/mm/hugetlbpage.c | 72
arch/ia64/kernel/time.c | 12
arch/m68k/kernel/time.c | 8
arch/m68knommu/kernel/time.c | 8
arch/mips/au1000/common/time.c | 12
arch/mips/baget/time.c | 8
arch/mips/dec/time.c | 16
arch/mips/ite-boards/generic/time.c | 16
arch/mips/kernel/sysirix.c | 4
arch/mips/kernel/time.c | 12
arch/mips/mips-boards/generic/time.c | 16
arch/mips/philips/nino/time.c | 8
arch/mips64/mips-boards/generic/time.c | 16
arch/mips64/sgi-ip22/ip22-timer.c | 16
arch/mips64/sgi-ip27/ip27-timer.c | 12
arch/parisc/kernel/sys_parisc32.c | 4
arch/parisc/kernel/time.c | 16
arch/ppc/kernel/time.c | 12
arch/ppc/platforms/pmac_time.c | 8
arch/ppc64/kernel/time.c | 16
arch/s390/kernel/time.c | 12
arch/s390x/kernel/time.c | 12
arch/sh/kernel/time.c | 12
arch/sparc/kernel/pcic.c | 8
arch/sparc/kernel/time.c | 12
arch/sparc64/kernel/time.c | 16
arch/um/kernel/time_kern.c | 4
arch/v850/kernel/time.c | 8
arch/x86_64/kernel/time.c | 12
drivers/char/Makefile | 7
drivers/scsi/aic7xxx/aic79xx_osm.c | 18
drivers/scsi/aic7xxx/aic79xx_osm.h | 3
drivers/scsi/aic7xxx/aic7xxx_osm.c | 15
drivers/scsi/aic7xxx/aic7xxx_osm.h | 3
drivers/scsi/scsi_error.c | 98
fs/exec.c | 4
fs/fs-writeback.c | 12
fs/hugetlbfs/inode.c | 227
fs/super.c | 138
include/linux/hugetlb.h | 10
include/linux/module.h | 2
include/linux/sched.h | 47
include/linux/sysctl.h | 4
include/linux/time.h | 4
init/Kconfig | 3
init/main.c | 14
kernel/ksyms.c | 8
kernel/module.c | 53
kernel/sched.c | 512
kernel/sysctl.c | 4
kernel/time.c | 19
kernel/timer.c | 6
mm/Makefile | 2
mm/memory.c | 10
mm/mmap.c | 5
mm/page_alloc.c | 5
scripts/Makefile.build | 29
scripts/Makefile.lib | 1
scripts/Makefile.modver | 18
sound/pci/rme9652/hammerfall_mem.c | 7
sound/sound_firmware.c |30559 +++++++++++++++++++++------------
69 files changed, 21001 insertions(+), 11339 deletions(-)
* Re: [BENCHMARK] 2.5.59-mm8 with contest
From: Nick Piggin @ 2003-02-06 1:02 UTC (permalink / raw)
To: Andrew Morton; +Cc: Con Kolivas, linux kernel mailing list
Andrew Morton wrote:
>Con Kolivas wrote:
>
>>..
>>
>>This seems to be creeping up to the same as 2.5.59
>>...
>>and this seems to be taking significantly longer
>>...
>>And this load which normally changes little has significantly different
>>results.
>>
>>
>
>There were no I/O scheduler changes between -mm7 and -mm8. I
>demand a recount!
>
It would suggest process scheduler changes are making the
difference.
* Re: [BENCHMARK] 2.5.59-mm8 with contest
From: Con Kolivas @ 2003-02-06 8:08 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux kernel mailing list
On Thu, 6 Feb 2003 07:37 am, Andrew Morton wrote:
> Con Kolivas wrote:
> > ..
> >
> > This seems to be creeping up to the same as 2.5.59
> > ...
> > and this seems to be taking significantly longer
> > ...
> > And this load which normally changes little has significantly different
> > results.
>
> There were no I/O scheduler changes between -mm7 and -mm8. I
> demand a recount!
Repeated mm7 and mm8.
Recount-One for Martin, two for Martin. Same results; not the i/o scheduler
responsible for the changes, but I have a sneaking suspicion another
scheduler may be.
Con
* Re: [BENCHMARK] 2.5.59-mm8 with contest
From: Andrew Morton @ 2003-02-07 8:22 UTC (permalink / raw)
To: Con Kolivas; +Cc: linux-kernel
Con Kolivas <conman@kolivas.net> wrote:
>
> On Thu, 6 Feb 2003 07:37 am, Andrew Morton wrote:
> > Con Kolivas wrote:
> > > ..
> > >
> > > This seems to be creeping up to the same as 2.5.59
> > > ...
> > > and this seems to be taking significantly longer
> > > ...
> > > And this load which normally changes little has significantly different
> > > results.
> >
> > There were no I/O scheduler changes between -mm7 and -mm8. I
> > demand a recount!
>
> Repeated mm7 and mm8.
> Recount-One for Martin, two for Martin. Same results; not the i/o scheduler
> responsible for the changes, but I have a sneaking suspicion another
> scheduler may be.
Not sure.
With contest 0.60, io_load, ext3, with the scheduler changes:
Finished compiling kernel: elapsed: 161 user: 181 system: 16
Finished io_load: elapsed: 162 user: 0 system: 17 loads: 9
Finished compiling kernel: elapsed: 155 user: 179 system: 15
Finished io_load: elapsed: 155 user: 0 system: 17 loads: 9
Finished compiling kernel: elapsed: 166 user: 180 system: 15
Finished io_load: elapsed: 166 user: 0 system: 18 loads: 10
With the CPU scheduler changes backed out:
Finished compiling kernel: elapsed: 137 user: 181 system: 14
Finished io_load: elapsed: 138 user: 0 system: 9 loads: 5
Finished compiling kernel: elapsed: 142 user: 181 system: 14
Finished io_load: elapsed: 142 user: 0 system: 9 loads: 5
Finished compiling kernel: elapsed: 133 user: 181 system: 15
Finished io_load: elapsed: 133 user: 0 system: 12 loads: 7
So there's some diminution there, not a lot.
With no_load:
Finished compiling kernel: elapsed: 108 user: 179 system: 12
Finished no_load: elapsed: 108 user: 7 system: 12 loads: 0
Finished compiling kernel: elapsed: 107 user: 179 system: 13
Finished no_load: elapsed: 107 user: 7 system: 12 loads: 0
Finished compiling kernel: elapsed: 110 user: 178 system: 12
Finished no_load: elapsed: 110 user: 8 system: 14 loads: 0
It's very good either way. Probably with the scheduler changes we're
hitting a better balance.
* Re: [BENCHMARK] 2.5.59-mm8 with contest
From: Con Kolivas @ 2003-02-07 10:26 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel
On Fri, 7 Feb 2003 07:22 pm, Andrew Morton wrote:
> Con Kolivas <conman@kolivas.net> wrote:
> > On Thu, 6 Feb 2003 07:37 am, Andrew Morton wrote:
> > > Con Kolivas wrote:
> > > > ..
> > > >
> > > > This seems to be creeping up to the same as 2.5.59
> > > > ...
> > > > and this seems to be taking significantly longer
> > > > ...
> > > > And this load which normally changes little has significantly
> > > > different results.
> > >
> > > There were no I/O scheduler changes between -mm7 and -mm8. I
> > > demand a recount!
> >
> > Repeated mm7 and mm8.
> > Recount-One for Martin, two for Martin. Same results; not the i/o
> > scheduler responsible for the changes, but I have a sneaking suspicion
> > another scheduler may be.
>
> Not sure.
>
> With contest 0.60, io_load, ext3, with the scheduler changes:
>
> Finished compiling kernel: elapsed: 161 user: 181 system: 16
> Finished io_load: elapsed: 162 user: 0 system: 17 loads: 9
> Finished compiling kernel: elapsed: 155 user: 179 system: 15
> Finished io_load: elapsed: 155 user: 0 system: 17 loads: 9
> Finished compiling kernel: elapsed: 166 user: 180 system: 15
> Finished io_load: elapsed: 166 user: 0 system: 18 loads: 10
>
An average of 162/108 (io_load over no_load) prolongs it 50% for one process
(disk writing) while eight processes (the kernel compile) are running.
On my uniprocessor results it's 92% prolongation for one vs. four.
>
> With the CPU scheduler changes backed out:
>
> Finished compiling kernel: elapsed: 137 user: 181 system: 14
> Finished io_load: elapsed: 138 user: 0 system: 9 loads: 5
> Finished compiling kernel: elapsed: 142 user: 181 system: 14
> Finished io_load: elapsed: 142 user: 0 system: 9 loads: 5
> Finished compiling kernel: elapsed: 133 user: 181 system: 15
> Finished io_load: elapsed: 133 user: 0 system: 12 loads: 7
>
> So there's some diminution there, not a lot.
An average of 133/108 prolongs it 27% for one process when eight processes
are running.
On my results it's 44% prolongation.
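[The prolongation arithmetic can be recomputed from the per-run elapsed
times Andrew posted; a quick sketch of my own, using the three-run means,
so the figures land near but not exactly on the rounded numbers quoted
in-thread:]

```python
# Recompute the "prolongation" percentages from the per-run elapsed times
# posted above. My own arithmetic sketch: it uses three-run means, so the
# results approximate the rounded in-thread figures.

def prolongation_pct(load_times, noload_times):
    """Percent increase in mean compile time under load vs. no_load."""
    mean = lambda xs: sum(xs) / len(xs)
    return round((mean(load_times) / mean(noload_times) - 1) * 100)

noload = [108, 107, 110]                          # no_load compile times

print(prolongation_pct([161, 155, 166], noload))  # with scheduler changes -> 48
print(prolongation_pct([137, 142, 133], noload))  # changes backed out     -> 27
```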
> With no_load:
>
> Finished compiling kernel: elapsed: 108 user: 179 system: 12
> Finished no_load: elapsed: 108 user: 7 system: 12 loads: 0
> Finished compiling kernel: elapsed: 107 user: 179 system: 13
> Finished no_load: elapsed: 107 user: 7 system: 12 loads: 0
> Finished compiling kernel: elapsed: 110 user: 178 system: 12
> Finished no_load: elapsed: 110 user: 8 system: 14 loads: 0
>
>
> It's very good either way. Probably with the scheduler changes we're
> hitting a better balance.
I would have thought that the one disk-write-heavy process is getting more
than the lion's share with the new scheduler changes, and that the mm7
results were fairer?
Con