linux-kernel.vger.kernel.org archive mirror
* Re: [BENCHMARK] 2.5.46-mm1 with contest
@ 2002-11-08  0:32 Alan Willis
  2002-11-08  0:48 ` Andrew Morton
  0 siblings, 1 reply; 15+ messages in thread
From: Alan Willis @ 2002-11-08  0:32 UTC (permalink / raw)
  To: linux-kernel

> Why?  We are preempting during the generic file write/read routines, I
> bet, which can otherwise be long periods of latency.  CPU is up and I
> bet the throughput is down, but his test is getting the attention it
> wants.

  I'm curious: would running contest after a fresh boot with profile=2
provide a profile showing exactly where time is being spent?  Since
about 2.5.45 I've had some strange slow periods: starting aterm
takes a while, redrawing windows in X slows down, and it 'feels'
like my workstation has become a laptop that is just waking up.  Sometimes
this happens after only a few minutes of inactivity, after switching
virtual desktops in KDE, or when I have a lot of aterm instances running.
 Normal activity for me involves untarring and compiling lots of
software on a regular basis, on a 1.2GHz Celeron with 256MB of memory.  I'm
using 2.5.46+reiser4 patches at the moment.  I'll boot to 2.5.46-mm1
shortly, but I'd love to be able to use reiser4 with akpm's tree.

Would oprofile help figure out why aterm gets so effing slow at times?
I guess I need to sit down and figure out how to use it.
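(For the profile=2 route: after booting with that parameter, `readprofile -m System.map` dumps per-function tick counts, and sorting those by ticks shows where kernel time went. A minimal sketch of that post-processing follows; the sample lines and tick counts are hypothetical, just to illustrate the ticks/symbol/normalized-load column layout readprofile prints, not measurements from this machine.)

```python
# Sort readprofile-style output by tick count to find hot kernel
# functions. Sample data below is hypothetical, only showing the
# (ticks, symbol, normalized load) column layout.
sample = """\
   104 default_idle                 1.6250
  2310 generic_file_write           0.9020
   877 __copy_to_user               5.4812
    42 schedule                     0.0260
"""

def hottest(text, n=3):
    """Return the n symbols with the most profiling ticks."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3:
            ticks, symbol, _load = parts
            rows.append((int(ticks), symbol))
    rows.sort(reverse=True)  # most ticks first
    return [sym for _, sym in rows[:n]]

print(hottest(sample))  # → ['generic_file_write', '__copy_to_user', 'default_idle']
```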

-alan



* [BENCHMARK] 2.5.46-mm1 with contest
@ 2002-11-07 22:53 Con Kolivas
  2002-11-07 23:48 ` Robert Love
  0 siblings, 1 reply; 15+ messages in thread
From: Con Kolivas @ 2002-11-07 22:53 UTC (permalink / raw)
  To: linux kernel mailing list; +Cc: Andrew Morton

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Here are contest results showing 2.5.46-mm1 with preempt enabled. The other 
kernels have it disabled.

noload:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.44-mm6 [3]          75.7    91      0       0       1.06
2.5.46 [2]              74.1    92      0       0       1.04
2.5.46-mm1 [5]          74.0    93      0       0       1.04
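(Aside on reading these tables: the Ratio column appears to be the wall-clock Time divided by a fixed reference noload time; back-solving from the rows above gives a reference of roughly 71.2 seconds. That 71.2s is an inferred value, not taken from contest itself; a quick consistency check under that assumption:)

```python
# Ratio looks like Time / reference_noload_time. The ~71.2s reference
# is back-solved from the noload rows above, not read from contest.
REFERENCE = 71.2  # seconds (inferred)

def ratio(time_s):
    return round(time_s / REFERENCE, 2)

for kernel, t in [("2.5.44-mm6", 75.7), ("2.5.46", 74.1), ("2.5.46-mm1", 74.0)]:
    print(kernel, ratio(t))  # matches the Ratio column: 1.06, 1.04, 1.04
```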

cacherun:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.44-mm6 [3]          69.3    99      0       0       0.97
2.5.46 [2]              67.9    99      0       0       0.95
2.5.46-mm1 [5]          68.9    99      0       0       0.96

process_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.44-mm6 [3]          190.6   36      166     63      2.67
2.5.45 [5]              91.0    75      33      27      1.27
2.5.46 [1]              92.9    74      36      29      1.30
2.5.46-mm1 [5]          82.7    82      21      21      1.16

Much improved.

ctar_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.44-mm6 [3]          97.3    79      1       5       1.36
2.5.46 [1]              98.3    80      1       7       1.38
2.5.46-mm1 [5]          95.3    80      1       5       1.33

xtar_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.44-mm6 [3]          207.6   37      2       7       2.91
2.5.46 [1]              113.5   67      1       8       1.59
2.5.46-mm1 [5]          227.1   34      3       7       3.18

Whatever was causing this to be high in 2.5.44-mm6 is still there now.

io_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.44-mm6 [3]          284.1   28      20      10      3.98
2.5.46 [1]              600.5   13      48      12      8.41
2.5.46-mm1 [5]          134.3   58      6       8       1.88

Big change here. IO load is usually the one we feel the most.

read_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.44-mm6 [3]          104.3   73      7       4       1.46
2.5.46 [1]              103.5   75      7       4       1.45
2.5.46-mm1 [5]          103.2   74      6       4       1.45

list_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.44-mm6 [3]          95.3    75      1       20      1.33
2.5.46 [1]              96.8    74      2       22      1.36
2.5.46-mm1 [5]          101.4   70      1       22      1.42

mem_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.44-mm6 [3]          226.9   33      50      2       3.18
2.5.46 [3]              148.0   51      34      2       2.07
2.5.46-mm1 [5]          180.5   41      35      1       2.53

And this remains relatively high, but better than 2.5.44-mm6.

Unfortunately I've only run this with preempt enabled so far, and I believe
many of the improvements reflect that.

Con.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.0 (GNU/Linux)

iD8DBQE9yu7eF6dfvkL3i1gRAqGIAJ9f6XFfwO0sQOVBn5qZPfAFY5JdlwCggOZt
WXizAEgC23W+AURXApih9xc=
=MCT0
-----END PGP SIGNATURE-----




Thread overview: 15+ messages
2002-11-08  0:32 [BENCHMARK] 2.5.46-mm1 with contest Alan Willis
2002-11-08  0:48 ` Andrew Morton
2002-11-08 21:08   ` Alan Willis
2002-11-08 21:21     ` Andrew Morton
     [not found]       ` <YWxhbg==.a11f3fbc6d68c50c7f190513c1d3bacf@1037045821.cotse.net>
2002-11-11 21:03         ` Andrew Morton
2002-11-11 21:11           ` Alan Willis
2002-11-11 21:32             ` Andrew Morton
2002-11-13  0:14               ` Denis Vlasenko
2002-11-12 20:07                 ` Alan Willis
  -- strict thread matches above, loose matches on Subject: below --
2002-11-07 22:53 Con Kolivas
2002-11-07 23:48 ` Robert Love
2002-11-07 23:58   ` Andrew Morton
2002-11-08  0:04     ` Robert Love
2002-11-08  6:04       ` Con Kolivas
2002-11-08  0:10     ` Benjamin LaHaise
