From: Ingo Molnar <mingo@redhat.com>
To: Rick Lindsley <ricklind@us.ibm.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [patch] HT scheduler, sched-2.5.68-A9
Date: Tue, 22 Apr 2003 07:27:29 -0400 (EDT)	[thread overview]
Message-ID: <Pine.LNX.4.44.0304220712580.28721-100000@devserv.devel.redhat.com> (raw)
In-Reply-To: <200304221110.h3MBAE712394@owlet.beaverton.ibm.com>


On Tue, 22 Apr 2003, Rick Lindsley wrote:

>     yes. This 'un-sharing' of contexts happens unconditionally, whenever
>     we notice the situation. (ie. whenever a CPU goes completely idle
>     and notices an overloaded physical CPU.) On the HT system i have, i
>     have measured this to be a beneficial move even for the most trivial
>     things like infinite loop-counting.
> 
> I have access to a 4-proc HT so I can try it there too. Did you test
> with micro-benchmarks like the loop-counting or did you use something
> bigger?

it is very obviously the case that two tasks running on different physical
CPUs outperform two tasks running on the same physical CPU. I have
attempted to find the best-case sharing - and even that one underperforms.
I also measured cache-heavy gcc compilation, and there was almost no
speedup due to HT.

but it's not really a problem - the logical CPUs are probably quite cheap
on the physical side, so any improvement in overload situations is a help.  
But once the overload situation stops, we should avoid the false sharing
and spread those tasks out to one physical CPU each.
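
the un-sharing trigger above can be sketched as a toy model - all the
names and the topology encoding here are hypothetical, not the actual
sched.c code:

```c
#define NR_CPUS 4
#define SIBLING(cpu) ((cpu) ^ 1)   /* logical cpus 0/1 and 2/3 pair up */
#define PACKAGE(cpu) ((cpu) >> 1)  /* physical package of a logical cpu */

/* runnable load of the physical package that 'cpu' sits on */
static int package_load(const int nr_running[], int cpu)
{
	return nr_running[cpu] + nr_running[SIBLING(cpu)];
}

/*
 * should a completely idle logical cpu trigger active balancing?
 * only if some other package has both siblings busy - i.e. two
 * tasks falsely sharing one physical cpu while we sit idle.
 */
static int should_unshare(const int nr_running[], int idle_cpu)
{
	int cpu;

	if (package_load(nr_running, idle_cpu) != 0)
		return 0;	/* our own package is not fully idle */

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (PACKAGE(cpu) != PACKAGE(idle_cpu) &&
		    nr_running[cpu] && nr_running[SIBLING(cpu)])
			return 1;
	return 0;
}
```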

in any case, please feel free to do measurements in this direction;
more numbers never hurt.

>     the more per-logical-CPU cache a given SMT implementation has,
>     the less beneficial this move becomes - in that case the system
>     should rather be set up as a NUMA topology and scheduled via the
>     NUMA scheduler.
> 
> 	whew. So why are we perverting the migration thread to push
> 	rather than pull? If active_load_balance() finds an imbalance,
> 	why must we use such indirection?  Why decrement nr_running?
> 	Couldn't we put together a migration_req_t for the target queue's
> 	migration thread?
> 
>     i'm not sure what you mean by perverting the migration thread to
>     push rather than pull, as migration threads always push - it's not
>     different in this case either.
> 
> My bad -- I read the comments around migration_thread(), and they could
> probably be improved. [...]

i'll fix the comments up. And the migration concept originally was a pull
thing, but by now it has arrived at a clean push model.

> [...] When I looked at the code, yes, it's more of a push.  The
> migration thread process occupies the processor so that you can be sure
> the process-of-interest is not running and can be more easily
> manipulated.

yes, that's the core idea.

>     Also, active balancing is non-queued by nature. Is there a big
>     difference?
> 
> I'm not sure active balancing really is independent of cpus_allowed.

of course we never balance to a CPU not allowed, but what i meant is that
the forced migration triggered by a ->cpus_allowed change [ie. the removal
of the current CPU from the process' allowed CPU mask] is conceptually
different from the forced migration of a task between two allowed CPUs.
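
the distinction can be illustrated with a trivial mask check - the
names and the plain-unsigned-long mask below are hypothetical, not the
kernel's cpumask API:

```c
typedef unsigned long toy_cpumask_t;
#define TOY_CPU_MASK(cpu)	(1UL << (cpu))

/*
 * case 1: a ->cpus_allowed change. migration is *forced* whenever the
 * cpu the task currently runs on drops out of the new mask.
 */
static int must_migrate(toy_cpumask_t new_allowed, int cur_cpu)
{
	return !(new_allowed & TOY_CPU_MASK(cur_cpu));
}

/*
 * case 2: active balancing. both source and target cpu are already in
 * the task's allowed mask - we merely pick a better placement.
 */
static int can_active_balance(toy_cpumask_t allowed, int src, int dst)
{
	return (allowed & TOY_CPU_MASK(src)) && (allowed & TOY_CPU_MASK(dst));
}
```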

> Yes, all the searches are done without that restriction in place, but
> then we ultimately call load_balance(), which *will* care.
> load_balance() may end up not moving what we wanted (or anything at
> all.)

load_balance() will most definitely balance the task in question in the
active-balance case. The only reason it didn't succeed earlier is that
load_balance() is a passive "pull" concept, so it is not able to
break up the false sharing between those two tasks that are both actively
running. [it correctly sees a 2:0 imbalance between the runqueues and
tries to balance them, but both tasks are running.] This is why the "push"
concept of active-balancing has to kick in.
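
the pull limitation can be shown with a toy runqueue scan (hypothetical
names, grossly simplified): a puller may only steal tasks that are not
currently on a cpu, so a queue where every task is running yields
nothing:

```c
struct toy_task {
	int running;	/* 1 if currently executing on a logical cpu */
};

/*
 * passive "pull": scan the busiest queue for a task we may steal.
 * currently-running tasks are untouchable - which is exactly why a
 * 2:0 split across two sibling cpus cannot be fixed by pulling.
 */
static int find_pullable(const struct toy_task q[], int nr)
{
	int i;

	for (i = 0; i < nr; i++)
		if (!q[i].running)
			return i;
	return -1;	/* nothing to pull - only a push can help */
}
```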

in fact, in the active-balance case the imbalance is 3:0, because the
migration thread is running too, so we decrease nr_running artificially
before calling load_balance(). Otherwise a 3:1 setup could cause a false
migration. [the real load situation is 2:1 in that case.]
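
the arithmetic, in toy form - the divisor below is a simplification for
illustration, not the real imbalance formula:

```c
/* roughly: how many tasks are worth moving off the busiest queue? */
static int toy_imbalance(int busiest_nr, int local_nr)
{
	return (busiest_nr - local_nr) / 2;
}
```

with the migration thread counted, toy_imbalance(3, 1) is 1 and a task
would falsely be moved; after the artificial decrement, toy_imbalance(2, 1)
is 0 and nothing moves.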

we don't keep the runqueues locked while these migration requests are
pending, so there's a small window for the balancing to get behind - but
that risk is present with any statistical approach anyway.

	Ingo

