linux-kernel.vger.kernel.org archive mirror
* Re: [BENCHMARK] 2.5.40-mm2 with contest
@ 2002-10-07 12:11 Ed Tomlinson
  0 siblings, 0 replies; 10+ messages in thread
From: Ed Tomlinson @ 2002-10-07 12:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: Andrew Morton, Con Kolivas

Hi,

Actually, at 50 it swaps a lot less.  This morning after the daily updatedb run, there
was nothing in swap.  There was always stuff in swap after this in both 2.5 and 2.4.x...

Not sure if limiting swap is that good an idea.

Could we report the peak swap usage and swap rates in the benchmark?  Even though
in most cases it is the elapsed time that matters, it would be good to know
how much we are swapping, since this _might_ affect the bottom line.
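
Something as simple as a sampler polling /proc/meminfo next to the contest run
would do for a first cut. A rough sketch (a hypothetical helper, not part of
contest; SwapTotal/SwapFree are the /proc/meminfo fields in kB, and the
one-second interval is arbitrary):

/* swapwatch.c - sample swap usage while a benchmark runs.  Hypothetical
 * helper, not part of contest.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        unsigned long swap_total = 0, swap_free = 0, used, peak = 0;
        char line[128];
        FILE *f;

        for (;;) {
                f = fopen("/proc/meminfo", "r");
                if (!f)
                        return 1;
                while (fgets(line, sizeof(line), f)) {
                        sscanf(line, "SwapTotal: %lu kB", &swap_total);
                        sscanf(line, "SwapFree: %lu kB", &swap_free);
                }
                fclose(f);
                used = swap_total - swap_free;
                if (used > peak)
                        peak = used;
                printf("swap used: %lu kB (peak %lu kB)\n", used, peak);
                fflush(stdout);
                sleep(1);
        }
}

Swap-in/out rates could be derived the same way by differencing the kernel's
paging counters each interval, but the peak figure alone would already show
whether swapping is affecting the elapsed time.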

Ed Tomlinson




* Re: [BENCHMARK] 2.5.40-mm2 with contest
  2002-10-10 17:40         ` Bill Davidsen
@ 2002-10-10 23:17           ` Con Kolivas
  0 siblings, 0 replies; 10+ messages in thread
From: Con Kolivas @ 2002-10-10 23:17 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: Andrew Morton, linux kernel mailing list

On Friday 11 Oct 2002 3:40 am, Bill Davidsen wrote:
> On Tue, 8 Oct 2002, Con Kolivas wrote:
> > > Problem is, users have said they don't want that.  They say that they
> > > want to copy ISO images about all day and not swap.  I think.
> >
> > But do they really want that or do they think they want that without
> > knowing the consequences of such a setting?
>
> I have been able to tune bdflush in 2.4-aa kernels to be much more
> aggressive about moving data to disk under write pressure, and that has
> been a big plus, both in getting the write completed in less real time and
> in having fewer big pauses when doing trivial things like uncovering a
> window. I also see less swap used.
>
> > > It worries me.  It means that we'll be really slow to react to sudden
> > > load swings, and it increases the complexity of the analysis and
> > > testing.  And I really do want to give the user a single knob,
> > > which has understandable semantics and for which I can feasibly test
> > > all operating regions.
> > >
> > > I really, really, really, really don't want to get too fancy in there.
> >
> > Well I made it as simple as I possibly could. It seems to do what they
> > want (not swappy) but not at the expense of making the machine never
> > swap when it really needs to - and the performance seems to be better
> > all round in real usage. I guess the only thing is it isn't a fixed
> > number... unless we set a maximum swappiness level or... but then it
> > starts getting unnecessarily complicated with questionable benefits.
>
> I'm going to try this patch, but building a kernel on my standard test
> machine is painfully slow, so it will come after 41-ac2. It appears to
> address the balance issue dynamically.
>

I've been playing with the feedback loop a bit more and made it respond in 
proportion to the pressure present, rather than using the "magic number" of 10 
times the gain on the positive arm. I'm getting good results with it. Check it 
out below. I think if you want to give the users a "knob", then limiting the 
max_vm_swappiness rather than the current vm_swappiness would work.

> > > I have changed this code a bit, and have added other things.  Mainly
> > > over on the writer throttling side, which tends to be the place where
> > > the stress comes from in the first place.
> >
> > /me waits but is a little disappointed
>
> I actually like the idea of writer throttling, I just wonder how it will
> work at the corner cases like only one big writer (mkisofs) or the
> alternative, lots of little writers.

Worth trying it out I guess.

--- linux-2.5.41/mm/vmscan.c    2002-10-11 09:11:20.000000000 +1000
+++ linux-2.5.41-new/mm/vmscan.c 2002-10-11 00:51:06.000000000 +1000
@@ -44,7 +44,8 @@
 /*
  * From 0 .. 100.  Higher means more swappy.
  */
-int vm_swappiness = 50;
+int vm_swappiness = 0;
+int vm_swap_feedback;
 static long total_memory;

 #ifdef ARCH_HAS_PREFETCH
@@ -587,7 +588,18 @@
         * A 100% value of vm_swappiness will override this algorithm almost
         * altogether.
         */
-       swap_tendency = mapped_ratio / 2 + distress + vm_swappiness;
+       swap_tendency = mapped_ratio / 2 + distress;
+
+        vm_swap_feedback = (swap_tendency - 50)/10;
+        vm_swappiness += vm_swap_feedback;
+        if (vm_swappiness < 0){
+               vm_swappiness = 0;
+       }
+       else
+       if (vm_swappiness > 100){
+               vm_swappiness = 100;
+       }
+        swap_tendency += vm_swappiness;

        /*
         * Well that all made sense.  Now for some magic numbers.  Use the
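
For illustration, the feedback arithmetic above can be exercised in user space.
This is only a toy model - the load pattern is invented purely to show how
vm_swappiness ramps up under sustained pressure and decays when the pressure
goes away:

/* Toy user-space model of the proportional feedback in the patch above.
 * The mapped_ratio/distress inputs are made up; only the arithmetic
 * matches the patch.
 */
#include <stdio.h>

static int vm_swappiness;

static int swap_tendency(int mapped_ratio, int distress)
{
        int tendency = mapped_ratio / 2 + distress;
        int feedback = (tendency - 50) / 10;    /* same rule as above */

        vm_swappiness += feedback;
        if (vm_swappiness < 0)
                vm_swappiness = 0;
        else if (vm_swappiness > 100)
                vm_swappiness = 100;

        return tendency + vm_swappiness;
}

int main(void)
{
        int i;

        for (i = 0; i < 40; i++) {
                /* 20 passes of heavy pressure, then 20 of light load */
                int mapped = (i < 20) ? 90 : 30;
                int distress = (i < 20) ? 50 : 0;
                int tendency = swap_tendency(mapped, distress);

                printf("pass %2d: tendency=%3d swappiness=%3d\n",
                       i, tendency, vm_swappiness);
        }
        return 0;
}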




Con


* Re: [BENCHMARK] 2.5.40-mm2 with contest
  2002-10-10 17:32       ` Bill Davidsen
@ 2002-10-10 18:11         ` Andrew Morton
  0 siblings, 0 replies; 10+ messages in thread
From: Andrew Morton @ 2002-10-10 18:11 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: Con Kolivas, linux kernel mailing list

Bill Davidsen wrote:
> 
> On Mon, 7 Oct 2002, Andrew Morton wrote:
> 
> > Problem is, users have said they don't want that.  They say that they
> > want to copy ISO images about all day and not swap.  I think.
> >
> > It worries me.  It means that we'll be really slow to react to sudden
> > load swings, and it increases the complexity of the analysis and
> > testing.  And I really do want to give the user a single knob,
> > which has understandable semantics and for which I can feasibly test
> > all operating regions.
> >
> > I really, really, really, really don't want to get too fancy in there.
> 
> It is really desirable to improve write-intensive performance in 2.5. My
> response benchmark shows that 2.5.xx is seriously worse under heavy write
> load than 2.4.

2.5 and 2.5-mm are very different in this area.  You did not specify.

> And in 2.4 it is desirable to do tuning of bdflush for
> write loads, to keep performance up in -aa kernels. Andrea was kind enough
> to provide me some general hints in this area.
> 
> Here's what I think is happening.
> 
> 1 - the kernel is buffering too much data in the hope that it will
> possibly be reread. This is fine, but it results in swapping a lot of
> programs to make room, and finally a big cleanup to disk, which
> triggers...

This is why 2.5.41-mm2 has improved writer throttling, and it's
why it adjusts the throttling threshold down when the amount
of mapped memory is high.
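
The shape of the idea, as a stand-alone sketch rather than the actual -mm code
(the percentages are invented placeholders):

/* Illustrative sketch only, not the 2.5.41-mm2 code: the dirty threshold
 * at which writers get throttled is lowered when a large fraction of
 * memory is mapped, so heavy writers are squeezed before they can push
 * mapped pages out to swap.
 */
#include <stdio.h>

static unsigned long dirty_threshold(unsigned long total_pages,
                                     unsigned long mapped_pages)
{
        unsigned long thresh = total_pages * 40 / 100;  /* nominal 40% */
        unsigned long mapped_ratio = mapped_pages * 100 / total_pages;

        if (mapped_ratio > 50)          /* much mapped memory: */
                thresh /= 2;            /* throttle writers sooner */
        return thresh;
}

int main(void)
{
        unsigned long total = 128 * 1024;       /* pages: 512MB of 4k pages */

        printf("lightly mapped: throttle at %lu dirty pages\n",
               dirty_threshold(total, total / 10));
        printf("heavily mapped: throttle at %lu dirty pages\n",
               dirty_threshold(total, total * 7 / 10));
        return 0;
}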
 
> 2 - without the io scheduler, having a bunch of writes has a very bad
> effect on read performance, including swap-in. While it's hard to be sure,
> I think I see a program getting a fault to page in a data page (while
> massive write load is present) and, while it is blocked, some of its code
> pages are released.

Yes, that happens quite a lot.
 
> I think there's room for improving the performance, as the "swappiness"
> patch shows. I played with trying to block a process after it had a
> certain amount of data buffered for write, but it didn't do what I wanted.
> I think the total buffered data in the system needs to be considered as
> well.

It does.  The throttling of write(2) callers is a critical part
of the VM.   Large amounts of dirty data cause lots of problems.
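
In outline, the throttling amounts to something like the toy model below:
once dirty memory passes a limit, the write(2) caller itself has to clean
pages before it is allowed to dirty more. This is only an illustration of
the principle, not the code in mm/page-writeback.c, and the numbers are
invented:

/* Toy model of write throttling; not kernel code.  A writer dirties
 * pages, and once the dirty total crosses a limit the writer must do
 * writeback itself before continuing, so dirty data can never run away.
 */
#include <stdio.h>

#define TOTAL_PAGES     1000
#define DIRTY_LIMIT     (TOTAL_PAGES * 40 / 100)
#define WRITEBACK_CHUNK 32

static int dirty;

static void write_pages(int n)
{
        dirty += n;
        while (dirty > DIRTY_LIMIT) {
                /* the caller is throttled: it cleans pages itself */
                dirty -= WRITEBACK_CHUNK;
                printf("throttled: cleaned %d pages, %d still dirty\n",
                       WRITEBACK_CHUNK, dirty);
        }
}

int main(void)
{
        int i;

        for (i = 0; i < 20; i++)
                write_pages(50);        /* a greedy writer, e.g. mkisofs */
        return 0;
}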


* Re: [BENCHMARK] 2.5.40-mm2 with contest
  2002-10-08  1:41       ` Con Kolivas
@ 2002-10-10 17:40         ` Bill Davidsen
  2002-10-10 23:17           ` Con Kolivas
  0 siblings, 1 reply; 10+ messages in thread
From: Bill Davidsen @ 2002-10-10 17:40 UTC (permalink / raw)
  To: Con Kolivas; +Cc: Andrew Morton, linux kernel mailing list

On Tue, 8 Oct 2002, Con Kolivas wrote:

> > Problem is, users have said they don't want that.  They say that they
> > want to copy ISO images about all day and not swap.  I think.
> 
> But do they really want that or do they think they want that without knowing the
> consequences of such a setting?

I have been able to tune bdflush in 2.4-aa kernels to be much more
aggressive about moving data to disk under write pressure, and that has
been a big plus, both in getting the write completed in less real time and
in having fewer big pauses when doing trivial things like uncovering a
window. I also see less swap used.

> 
> > It worries me.  It means that we'll be really slow to react to sudden
> > load swings, and it increases the complexity of the analysis and
> > testing.  And I really do want to give the user a single knob,
> > which has understandable semantics and for which I can feasibly test
> > all operating regions.
> > 
> > I really, really, really, really don't want to get too fancy in there.
> 
> Well I made it as simple as I possibly could. It seems to do what they want (not
> swappy) but not at the expense of making the machine never swap when it
> really needs to - and the performance seems to be better all round in real
> usage. I guess the only thing is it isn't a fixed number... unless we set a
> maximum swappiness level or... but then it starts getting unnecessarily
> complicated with questionable benefits.

I'm going to try this patch, but building a kernel on my standard test
machine is painfully slow, so it will come after 41-ac2. It appears to
address the balance issue dynamically.
 
> > I have changed this code a bit, and have added other things.  Mainly
> > over on the writer throttling side, which tends to be the place where
> > the stress comes from in the first place.
> 
> /me waits but is a little disappointed

I actually like the idea of writer throttling, I just wonder how it will
work at the corner cases like only one big writer (mkisofs) or the
alternative, lots of little writers. 

-- 
bill davidsen <davidsen@tmr.com>
  CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.



* Re: [BENCHMARK] 2.5.40-mm2 with contest
  2002-10-08  1:25     ` Andrew Morton
  2002-10-08  1:41       ` Con Kolivas
@ 2002-10-10 17:32       ` Bill Davidsen
  2002-10-10 18:11         ` Andrew Morton
  1 sibling, 1 reply; 10+ messages in thread
From: Bill Davidsen @ 2002-10-10 17:32 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Con Kolivas, linux kernel mailing list

On Mon, 7 Oct 2002, Andrew Morton wrote:

> Problem is, users have said they don't want that.  They say that they
> want to copy ISO images about all day and not swap.  I think.
> 
> It worries me.  It means that we'll be really slow to react to sudden
> load swings, and it increases the complexity of the analysis and
> testing.  And I really do want to give the user a single knob,
> which has understandable semantics and for which I can feasibly test
> all operating regions.
> 
> I really, really, really, really don't want to get too fancy in there.

It is really desirable to improve write-intensive performance in 2.5. My
response benchmark shows that 2.5.xx is seriously worse under heavy write
load than 2.4. And in 2.4 it is desirable to do tuning of bdflush for
write loads, to keep performance up in -aa kernels. Andrea was kind enough
to provide me some general hints in this area.

Here's what I think is happening.

1 - the kernel is buffering too much data in the hope that it will
possibly be reread. This is fine, but it results in swapping a lot of
programs to make room, and finally a big cleanup to disk, which
triggers...

2 - without the io scheduler, having a bunch of writes has a very bad
effect on read performance, including swap-in. While it's hard to be sure,
I think I see a program getting a fault to page in a data page (while
massive write load is present) and, while it is blocked, some of its code
pages are released.

I think there's room for improving the performance, as the "swappiness"
patch shows. I played with trying to block a process after it had a
certain amount of data buffered for write, but it didn't do what I wanted.
I think the total buffered data in the system needs to be considered as
well.
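
A toy comparison makes the difference clear: with several writers, a purely
per-process cap lets the system-wide dirty total grow with the number of
writers, which is why the global figure has to be part of the decision (all
numbers below are invented):

/* Toy model only - per-process write quota versus a system-wide dirty
 * limit.  With N writers, the per-process rule alone allows roughly N
 * times as much dirty data to accumulate.
 */
#include <stdio.h>

#define PER_PROCESS_QUOTA       200     /* pages */
#define GLOBAL_LIMIT            300     /* pages */
#define NWRITERS                4

int main(void)
{
        int per_proc[NWRITERS] = { 0 };
        int global = 0, i, step;

        for (step = 0; step < 100; step++)
                for (i = 0; i < NWRITERS; i++) {
                        if (per_proc[i] >= PER_PROCESS_QUOTA)
                                continue;       /* per-process rule blocks */
                        per_proc[i] += 10;
                        global += 10;
                }

        printf("per-process quota only: %d dirty pages accumulated\n", global);
        printf("a global limit would have stopped the writers at %d\n",
               GLOBAL_LIMIT);
        return 0;
}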

I believe one of the people who actually works on this stuff regularly has
mentioned this, but I can't find the post quickly.

-- 
bill davidsen <davidsen@tmr.com>
  CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.



* Re: [BENCHMARK] 2.5.40-mm2 with contest
  2002-10-08  1:25     ` Andrew Morton
@ 2002-10-08  1:41       ` Con Kolivas
  2002-10-10 17:40         ` Bill Davidsen
  2002-10-10 17:32       ` Bill Davidsen
  1 sibling, 1 reply; 10+ messages in thread
From: Con Kolivas @ 2002-10-08  1:41 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux kernel mailing list

Quoting Andrew Morton <akpm@digeo.com>:

> Con Kolivas wrote:
> > 
> > ...
> > -       swap_tendency = mapped_ratio / 2 + distress + vm_swappiness;
> > +       swap_tendency = mapped_ratio / 2 + distress ;
> > +       if (swap_tendency > 50){
> > +               if (vm_swappiness <= 990) vm_swappiness+=10;
> > +               }
> > +               else
> > +               if (vm_swappiness > 0) vm_swappiness--;
> > +       swap_tendency += (vm_swappiness / 10);
> >
> 
> heh, that could work.  So basically you're saying "the longer we're
> under swap stress, the more swappy we want to get".

Exactly, which made complete sense to me.

> 
> Problem is, users have said they don't want that.  They say that they
> want to copy ISO images about all day and not swap.  I think.

But do they really want that or do they think they want that without knowing the
consequences of such a setting?

> It worries me.  It means that we'll be really slow to react to sudden
> load swings, and it increases the complexity of the analysis and
> testing.  And I really do want to give the user a single knob,
> which has understandable semantics and for which I can feasibly test
> all operating regions.
> 
> I really, really, really, really don't want to get too fancy in there.

Well I made it as simple as I possibly could. It seems to do what they want (not
swappy) but not at the expense of making the machine never swap when it
really needs to - and the performance seems to be better all round in real
usage. I guess the only thing is it isn't a fixed number... unless we set a
maximum swappiness level or... but then it starts getting unnecessarily
complicated with questionable benefits.

> I have changed this code a bit, and have added other things.  Mainly
> over on the writer throttling side, which tends to be the place where
> the stress comes from in the first place.

/me waits but is a little disappointed

Con


* Re: [BENCHMARK] 2.5.40-mm2 with contest
  2002-10-08  1:01   ` Con Kolivas
@ 2002-10-08  1:25     ` Andrew Morton
  2002-10-08  1:41       ` Con Kolivas
  2002-10-10 17:32       ` Bill Davidsen
  0 siblings, 2 replies; 10+ messages in thread
From: Andrew Morton @ 2002-10-08  1:25 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux kernel mailing list

Con Kolivas wrote:
> 
> ...
> -       swap_tendency = mapped_ratio / 2 + distress + vm_swappiness;
> +       swap_tendency = mapped_ratio / 2 + distress ;
> +       if (swap_tendency > 50){
> +               if (vm_swappiness <= 990) vm_swappiness+=10;
> +               }
> +               else
> +               if (vm_swappiness > 0) vm_swappiness--;
> +       swap_tendency += (vm_swappiness / 10);
>

heh, that could work.  So basically you're saying "the longer we're
under swap stress, the more swappy we want to get".

Problem is, users have said they don't want that.  They say that they
want to copy ISO images about all day and not swap.  I think.

It worries me.  It means that we'll be really slow to react to sudden
load swings, and it increases the complexity of the analysis and
testing.  And I really do want to give the user a single knob,
which has understandable semantics and for which I can feasibly test
all operating regions.

I really, really, really, really don't want to get too fancy in there.

I have changed this code a bit, and have added other things.  Mainly
over on the writer throttling side, which tends to be the place where
the stress comes from in the first place.


* Re: [BENCHMARK] 2.5.40-mm2 with contest
  2002-10-07  7:38 ` Andrew Morton
@ 2002-10-08  1:01   ` Con Kolivas
  2002-10-08  1:25     ` Andrew Morton
  0 siblings, 1 reply; 10+ messages in thread
From: Con Kolivas @ 2002-10-08  1:01 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux kernel mailing list

Andrew

Quoting Andrew Morton <akpm@digeo.com>:

> -mm2 has the "don't swap so much" knob.  By default it is set at 50%.
> The VM _wants_ to reclaim lots of memory from mem_load so that
> gcc has some cache to chew on.  But you the operator have said
> "I know better and I don't want you to do that".
> 
> Because it is prevented from building enough cache, gcc is issuing
> a ton of reads, which are hampering the swapstorm which is happening
> at the other end of the disk.  It's a lose-lose.
> 
> There's not much that can be done about that really (apart from
> some heavy-handed load control) - if you want to optimise for
> throughput above all else,
> 
> 	echo 100 > /proc/sys/vm/swappiness
> 
> (I suspect our swap performance right now is fairly poor, in terms
> of block allocation, readaround, etc.  Nobody has looked at that in
> some time afaik.  But tuning in there is unlikely to make a huge
> difference).


I like the idea of the swappiness switch. It seems to me that this shouldn't be
a magic number, though. I've experimented with making it auto-regulating and
found that a positive feedback arm ten times greater than the negative feedback
arm gives good results. Here is a patch describing that:

--- vmscan.old  2002-10-08 10:45:45.000000000 +1000
+++ vmscan.c    2002-10-08 10:48:35.000000000 +1000
@@ -44,7 +44,7 @@
 /*
  * From 0 .. 100.  Higher means more swappy.
  */
-int vm_swappiness = 50;
+int vm_swappiness = 0;

 #ifdef ARCH_HAS_PREFETCH
 #define prefetch_prev_lru_page(_page, _base, _field)                   \
@@ -535,7 +535,13 @@
         * A 100% value of vm_swappiness will override this algorithm almost
         * altogether.
         */
-       swap_tendency = mapped_ratio / 2 + distress + vm_swappiness;
+       swap_tendency = mapped_ratio / 2 + distress ;
+       if (swap_tendency > 50){
+               if (vm_swappiness <= 990) vm_swappiness+=10;
+               }
+               else
+               if (vm_swappiness > 0) vm_swappiness--;
+       swap_tendency += (vm_swappiness / 10);
        if (akpm_print) printk(" st:%ld\n", swap_tendency);

        if (akpm_print) printk("\n");

----------------------

And here are the results I have obtained with that (mm2v is mm2 with variable
vm_swappiness):

noload:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.40-mm1 [1]          72.9    93      0       0       1.09
2.5.40-mm2 [1]          72.2    93      0       0       1.07
2.5.40-mm2v [2]         73.1    92      0       0       1.09

process_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.40-mm1 [2]          86.9    77      30      25      1.29
2.5.40-mm2 [1]          98.0    69      45      33      1.46
2.5.40-mm2v [2]         85.6    77      29      25      1.27

tarc_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.40-mm1 [1]          94.4    81      1       6       1.41
2.5.40-mm2 [1]          91.9    82      1       6       1.37
2.5.40-mm2v [2]         91.2    82      1       6       1.36

tarx_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.40-mm1 [1]          191.5   39      3       7       2.85
2.5.40-mm2 [1]          188.1   39      3       7       2.80
2.5.40-mm2v [2]         174.6   46      2       7       2.60

io_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.40-mm1 [1]          326.2   24      23      11      4.86
2.5.40-mm2 [2]          208.0   38      12      10      3.10
2.5.40-mm2v [3]         254.0   31      15      10      3.78

read_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.40-mm1 [1]          104.5   74      9       5       1.56
2.5.40-mm2 [1]          102.7   75      7       4       1.53
2.5.40-mm2v [2]         105.0   72      7       4       1.56

lslr_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.40-mm1 [1]          96.6    73      1       22      1.44
2.5.40-mm2 [1]          94.3    75      1       21      1.40
2.5.40-mm2v [2]         97.9    71      1       20      1.46

mem_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.5.40-mm1 [2]          107.7   68      29      2       1.60
2.5.40-mm2 [2]          165.1   44      38      2       2.46
2.5.40-mm2v [3]         118.1   62      30      2       1.76

Most of the time it seems to hover around 500 during normal use
(equivalent to 50 on the original vm_swappiness scale).
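
For reference, the internal value in this version runs from 0 to 1000 and is
divided by ten before use, so 500 is an effective swappiness of 50. A toy
user-space run of the same +10/-1 rule (with an invented load pattern) shows
the asymmetry - it climbs quickly under pressure and drifts back down slowly:

/* Toy run of the +10/-1 rule from the patch above, outside the kernel.
 * The load numbers are made up; only the update rule matches the patch.
 */
#include <stdio.h>

static int vm_swappiness;       /* 0..1000 internally */

static void one_pass(int mapped_ratio, int distress)
{
        int tendency = mapped_ratio / 2 + distress;

        if (tendency > 50) {
                if (vm_swappiness <= 990)
                        vm_swappiness += 10;
        } else if (vm_swappiness > 0)
                vm_swappiness--;
}

int main(void)
{
        int i;

        for (i = 0; i < 50; i++)        /* a burst of reclaim pressure */
                one_pass(80, 40);
        printf("after pressure: %d (effective %d)\n",
               vm_swappiness, vm_swappiness / 10);

        for (i = 0; i < 50; i++)        /* then a quiet period */
                one_pass(20, 0);
        printf("after idling:   %d (effective %d)\n",
               vm_swappiness, vm_swappiness / 10);
        return 0;
}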

What do you think?
Con


* Re: [BENCHMARK] 2.5.40-mm2 with contest
  2002-10-07  3:21 Con Kolivas
@ 2002-10-07  7:38 ` Andrew Morton
  2002-10-08  1:01   ` Con Kolivas
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2002-10-07  7:38 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux kernel mailing list

Con Kolivas wrote:
> 
> 
> Here are the latest results including 2.5.40-mm2 with contest v0.50
> (http://contest.kolivas.net)
> 
> ...
> 
> mem_load:
> Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
> 2.4.19 [3]              100.0   72      33      3       1.49
> 2.5.38 [3]              107.3   70      34      3       1.60
> 2.5.39 [2]              103.1   72      31      3       1.53
> 2.5.40 [2]              102.5   72      31      3       1.53
> 2.5.40-mm1 [2]          107.7   68      29      2       1.60
> 2.5.40-mm2 [2]          165.1   44      38      2       2.46
> 

-mm2 has the "don't swap so much" knob.  By default it is set at 50%.
The VM _wants_ to reclaim lots of memory from mem_load so that
gcc has some cache to chew on.  But you the operator have said
"I know better and I don't want you to do that".

Because it is prevented from building enough cache, gcc is issuing
a ton of reads, which are hampering the swapstorm which is happening
at the other end of the disk.  It's a lose-lose.

There's not much that can be done about that really (apart from
some heavy-handed load control) - if you want to optimise for
throughput above all else,

	echo 100 > /proc/sys/vm/swappiness

(I suspect our swap performance right now is fairly poor, in terms
of block allocation, readaround, etc.  Nobody has looked at that in
some time afaik.  But tuning in there is unlikely to make a huge
difference).
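
For reference, the knob enters the reclaim decision as the patches elsewhere
in this thread quote it: swap_tendency = mapped_ratio / 2 + distress +
vm_swappiness, so at 100 it can outweigh the other two terms. A small
stand-alone illustration (the mapped_ratio and distress inputs are invented):

/* Illustration of the formula quoted in the patches in this thread.
 * Only the relative weight of the knob is of interest here.
 */
#include <stdio.h>

static int swap_tendency(int mapped_ratio, int distress, int swappiness)
{
        return mapped_ratio / 2 + distress + swappiness;
}

int main(void)
{
        int knob[] = { 0, 50, 100 };
        int i;

        /* a mem_load-ish situation: most memory mapped, moderate distress */
        for (i = 0; i < 3; i++)
                printf("swappiness %3d -> swap_tendency %3d\n",
                       knob[i], swap_tendency(80, 25, knob[i]));
        return 0;
}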


* [BENCHMARK] 2.5.40-mm2 with contest
@ 2002-10-07  3:21 Con Kolivas
  2002-10-07  7:38 ` Andrew Morton
  0 siblings, 1 reply; 10+ messages in thread
From: Con Kolivas @ 2002-10-07  3:21 UTC (permalink / raw)
  To: linux kernel mailing list; +Cc: Andrew Morton

 
Here are the latest results including 2.5.40-mm2 with contest v0.50 
(http://contest.kolivas.net) 
 
noload: 
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio 
2.4.19 [3]              67.7    98      0       0       1.01 
2.5.38 [3]              72.0    93      0       0       1.07 
2.5.39 [2]              72.2    93      0       0       1.07 
2.5.40 [1]              72.5    93      0       0       1.08 
2.5.40-mm1 [1]          72.9    93      0       0       1.09 
2.5.40-mm2 [1]          72.2    93      0       0       1.07 
 
process_load: 
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio 
2.4.19 [3]              106.5   59      112     43      1.59 
2.5.38 [3]              89.5    74      34      28      1.33 
2.5.39 [2]              91.2    73      36      28      1.36 
2.5.40 [2]              82.8    80      25      23      1.23 
2.5.40-mm1 [2]          86.9    77      30      25      1.29 
2.5.40-mm2 [1]          98.0    69      45      33      1.46 
 
io_load: 
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio 
2.4.19 [3]              492.6   14      38      10      7.33 
2.5.38 [1]              4000.0  1       500     1       59.55 
2.5.39 [2]              423.9   18      30      11      6.31 
2.5.40 [1]              315.7   25      22      10      4.70 
2.5.40-mm1 [1]          326.2   24      23      11      4.86 
2.5.40-mm2 [2]          208.0   38      12      10      3.10 
 
mem_load: 
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio 
2.4.19 [3]              100.0   72      33      3       1.49 
2.5.38 [3]              107.3   70      34      3       1.60 
2.5.39 [2]              103.1   72      31      3       1.53 
2.5.40 [2]              102.5   72      31      3       1.53 
2.5.40-mm1 [2]          107.7   68      29      2       1.60 
2.5.40-mm2 [2]          165.1   44      38      2       2.46 
 
Well, something happened here. The tuning under IO load has relaxed the
pressure even more, allowing kernel compilation to proceed. Mem load has
changed dramatically, though, with a disproportionately large increase in
kernel compilation time given only a modest increase in the amount of work
done by mem load. Process_load is proportionately longer than under mm1, with
a corresponding rise in load work done.
 
Below are also the experimental results with the newer loads: 
 
tarc_load: 
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio 
2.4.19 [2]              106.5   70      1       8       1.59 
2.5.38 [1]              97.2    79      1       6       1.45 
2.5.39 [1]              91.8    83      1       6       1.37 
2.5.40 [1]              96.9    80      1       6       1.44 
2.5.40-mm1 [1]          94.4    81      1       6       1.41 
2.5.40-mm2 [1]          91.9    82      1       6       1.37 
 
tarx_load: 
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio 
2.4.19 [1]              132.4   55      2       9       1.97 
2.5.38 [1]              120.5   63      2       8       1.79 
2.5.39 [1]              108.3   69      1       6       1.61 
2.5.40 [1]              110.7   68      1       6       1.65 
2.5.40-mm1 [1]          191.5   39      3       7       2.85 
2.5.40-mm2 [1]          188.1   39      3       7       2.80 
 
read_load: 
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio 
2.4.19 [2]              134.1   54      14      5       2.00 
2.5.38 [2]              100.5   76      9       5       1.50 
2.5.39 [2]              101.3   74      14      6       1.51 
2.5.40 [1]              101.5   73      13      5       1.51 
2.5.40-mm1 [1]          104.5   74      9       5       1.56 
2.5.40-mm2 [1]          102.7   75      7       4       1.53 
 
lslr_load: 
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio 
2.4.19 [1]              89.8    77      1       20      1.34 
2.5.38 [1]              99.1    71      1       20      1.48 
2.5.39 [1]              101.3   70      2       24      1.51 
2.5.40 [1]              97.0    72      1       21      1.44 
2.5.40-mm1 [1]          96.6    73      1       22      1.44 
2.5.40-mm2 [1]          94.3    75      1       21      1.40 
 
These do not appear significantly different. 
 
Con 



Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2002-10-07 12:11 [BENCHMARK] 2.5.40-mm2 with contest Ed Tomlinson
  -- strict thread matches above, loose matches on Subject: below --
2002-10-07  3:21 Con Kolivas
2002-10-07  7:38 ` Andrew Morton
2002-10-08  1:01   ` Con Kolivas
2002-10-08  1:25     ` Andrew Morton
2002-10-08  1:41       ` Con Kolivas
2002-10-10 17:40         ` Bill Davidsen
2002-10-10 23:17           ` Con Kolivas
2002-10-10 17:32       ` Bill Davidsen
2002-10-10 18:11         ` Andrew Morton
