* [patch] trivial change in kernel/sched.c in 2.6.0-test9+
@ 2003-11-26  5:27 Pat Erley
  2003-11-26  5:55 ` Muli Ben-Yehuda
  0 siblings, 1 reply; 5+ messages in thread
From: Pat Erley @ 2003-11-26  5:27 UTC (permalink / raw)
  To: linux-kernel

This ends up saving a few math operations every time a child
process exits (i.e., on every call to sched_exit(task_t *p)).

Here's my comment on the contents of the patch (left out of
the actual patch itself):

    /*
     * the function below was originally this, for anyone
     * wondering what I changed.  I merely used some algebra
     * to factor out a 1 / (EXIT_WEIGHT + 1)
     *
     *      p->parent->sleep_avg = p->parent->sleep_avg /
     *      (EXIT_WEIGHT + 1) * EXIT_WEIGHT + p->sleep_avg /
     *      (EXIT_WEIGHT + 1);
     *
     * the only possible effects I see this having are:
     *
     *    1. fewer math operations for each child process exiting
     *    2. higher accuracy in the value of p->parent->sleep_avg
     *       due to using only one division instead of two
     *
     */
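
For anyone who wants to see the rounding difference from point 2
above, here's a quick userspace sketch (illustration only, not part
of the patch; EXIT_WEIGHT is taken as 3 just for the demo):

#include <stdio.h>

#define EXIT_WEIGHT 3

int main(void)
{
	unsigned long parent = 1000003;	/* made-up sleep_avg values */
	unsigned long child  = 999999;

	/* old form: two truncating divisions */
	unsigned long old_form = parent / (EXIT_WEIGHT + 1) * EXIT_WEIGHT +
				 child / (EXIT_WEIGHT + 1);

	/* new form: factor out 1/(EXIT_WEIGHT + 1), one truncating division */
	unsigned long new_form = (parent * EXIT_WEIGHT + child) /
				 (EXIT_WEIGHT + 1);

	printf("old: %lu  new: %lu\n", old_form, new_form);
	return 0;
}

With these values the old form prints 999999 while the new form
prints 1000002, which is the exact result, so the new expression
loses less to integer truncation.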

Patches cleanly (a little offset, but no fuzz) on test9, test9-mms,
test10, test10-mm1.

Pat Erley

/*************** patch follows ******************/


--- linux-2.6.0-test9/kernel/sched.c    2003-11-23 02:33:34.000000000 -0500
+++ linux/kernel/sched.c        2003-11-23 02:47:29.730649061 -0500
@@ -720,8 +720,8 @@
         * the sleep_avg of the parent as well.
         */
        if (p->sleep_avg < p->parent->sleep_avg)
-               p->parent->sleep_avg = p->parent->sleep_avg /
-               (EXIT_WEIGHT + 1) * EXIT_WEIGHT + p->sleep_avg /
+               p->parent->sleep_avg = ( p->parent->sleep_avg *
+               EXIT_WEIGHT + p->sleep_avg ) /
                (EXIT_WEIGHT + 1);
 }


-- 


* Re: [patch] trivial change in kernel/sched.c in 2.6.0-test9+
  2003-11-26  5:27 [patch] trivial change in kernel/sched.c in 2.6.0-test9+ Pat Erley
@ 2003-11-26  5:55 ` Muli Ben-Yehuda
  2003-11-26  6:07   ` s0be
  0 siblings, 1 reply; 5+ messages in thread
From: Muli Ben-Yehuda @ 2003-11-26  5:55 UTC (permalink / raw)
  To: Pat Erley; +Cc: linux-kernel


On Wed, Nov 26, 2003 at 12:27:13AM -0500, Pat Erley wrote:

> this ends up saving a few math operations any time a child
> process exits. ( calling sched_exit(task_t * p) )

Yes, but does it have any noticeable effect on performance whatsoever?
premature optimization, root of all evil, etc. 

Cheers, 
Muli 
-- 
Muli Ben-Yehuda
http://www.mulix.org | http://mulix.livejournal.com/

"the nucleus of linux oscillates my world" - gccbot@#offtopic




* Re: [patch] trivial change in kernel/sched.c in 2.6.0-test9+
  2003-11-26  5:55 ` Muli Ben-Yehuda
@ 2003-11-26  6:07   ` s0be
  2003-11-26 11:02     ` Muli Ben-Yehuda
  0 siblings, 1 reply; 5+ messages in thread
From: s0be @ 2003-11-26  6:07 UTC (permalink / raw)
  To: Muli Ben-Yehuda; +Cc: linux-kernel

> > this ends up saving a few math operations any time a child
> > process exits. ( calling sched_exit(task_t * p) )
> 
> Yes, but does it have any noticeable effect on performance whatsoever?
> premature optimization, root of all evil, etc. 

I'm not on a system right now that I can take down for long enough,
or risk crashing, to check this.  And, to be honest, I can't think
of anything other than a fork bomb that would do a good job of testing
this.  I just remember helping Con with the O(3)int scheduler hacks,
and he seemed concerned with how many math operations take place in
sched.c because it is in the core.
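
If it helps, something along these lines (just a sketch; the child
count is arbitrary) should hammer fork/exit, and therefore
sched_exit(), far harder than a compile would:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define CHILDREN 200000		/* arbitrary; pick something your box survives */

int main(void)
{
	long i;

	for (i = 0; i < CHILDREN; i++) {
		pid_t pid = fork();

		if (pid == 0) {
			_exit(0);		/* child: exit immediately */
		} else if (pid > 0) {
			waitpid(pid, NULL, 0);	/* reap so we don't fork-bomb the box */
		} else {
			perror("fork");
			exit(1);
		}
	}

	printf("spawned and reaped %d children\n", CHILDREN);
	return 0;
}

Run it under time(1) on patched and unpatched kernels and compare.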

If you can suggest a way to test this, I will test it on my system 
tomorrow.

Pat Erley


* Re: [patch] trivial change in kernel/sched.c in 2.6.0-test9+
  2003-11-26  6:07   ` s0be
@ 2003-11-26 11:02     ` Muli Ben-Yehuda
  2003-11-27 16:46       ` Pat Erley
  0 siblings, 1 reply; 5+ messages in thread
From: Muli Ben-Yehuda @ 2003-11-26 11:02 UTC (permalink / raw)
  To: s0be; +Cc: linux-kernel


On Wed, Nov 26, 2003 at 01:07:01AM -0500, s0be wrote:

> If you can suggest a way to test this, I will test it on my system 
> tomorrow.

Just off the top of my head, you could try something like a kernel
compilation with and without it. I doubt you'll see any improvement,
though.. there are very few places in the kernel where such
micro-optimizations are worth it, IMVHO. 

Cheers, 
Muli 
-- 
Muli Ben-Yehuda
http://www.mulix.org | http://mulix.livejournal.com/

"the nucleus of linux oscillates my world" - gccbot@#offtopic




* Re: [patch] trivial change in kernel/sched.c in 2.6.0-test9+
  2003-11-26 11:02     ` Muli Ben-Yehuda
@ 2003-11-27 16:46       ` Pat Erley
  0 siblings, 0 replies; 5+ messages in thread
From: Pat Erley @ 2003-11-27 16:46 UTC (permalink / raw)
  To: Muli Ben-Yehuda; +Cc: linux-kernel

> > If you can suggest a way to test this, I will test it on my system 
> > tomorrow.
> 
> Just off the top of my head, you could try something like a kernel
> compilation with and without it. I doubt you'll see any improvement,
> though.. there are very few places in the kernel where such
> micro-optimizations are worth it, IMVHO. 


Well, I ran about 6 different compiles each on a patched and an unpatched
kernel, and my 'average' savings were about 1 second per 6-minute compile,
which is negligible.  I wonder whether doing a make is a good test of this.
Can anyone else out there come up with another way to check this with a
non-CPU-hog application?  I'd like some other cases to test this in.  When
each compile in a 'large' batch takes only half a second, I can see that
saving a division and an addition really wouldn't make a big difference,
but in a situation where you have a large number of short-lived threads in
a child process, it may end up saving a bit more.
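
For what it's worth, the raw cost of the two expressions can be
eyeballed from userspace with something like this (just a rough
sketch, it says nothing about sched_exit() in context; 3 is used for
EXIT_WEIGHT only for the demo, and you may need -lrt for
clock_gettime on older glibc):

#include <stdio.h>
#include <time.h>

#define EXIT_WEIGHT 3
#define ITERATIONS  100000000UL

static double elapsed(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
	volatile unsigned long parent = 1000003, child = 999999, r = 0;
	struct timespec t0, t1, t2;
	unsigned long i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < ITERATIONS; i++)	/* old form: two divisions */
		r = parent / (EXIT_WEIGHT + 1) * EXIT_WEIGHT +
		    child / (EXIT_WEIGHT + 1);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	for (i = 0; i < ITERATIONS; i++)	/* new form: one division */
		r = (parent * EXIT_WEIGHT + child) / (EXIT_WEIGHT + 1);
	clock_gettime(CLOCK_MONOTONIC, &t2);

	printf("old: %.2fs  new: %.2fs  (last result %lu)\n",
	       elapsed(t0, t1), elapsed(t1, t2), r);
	return 0;
}

The difference per iteration is tiny; the real question is only how
often sched_exit() actually runs on a given workload.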

Pat Erley

