linux-kernel.vger.kernel.org archive mirror
* VST and Sched Load Balance
From: Srivatsa Vaddagiri @ 2005-04-07 12:46 UTC (permalink / raw)
  To: george, nickpiggin, mingo; +Cc: high-res-timers-discourse, linux-kernel

Hi,
	VST patch (http://lwn.net/Articles/118693/) attempts to avoid useless 
regular (local) timer ticks when a CPU is idle.

I think a potential area which VST may need to address is 
scheduler load balance. If idle CPUs stop taking local timer ticks for 
some time, then during that period it could cause the various runqueues to 
go out of balance, since the idle CPUs will no longer pull tasks from 
non-idle CPUs. 

Do we care about this imbalance? Especially considering that most 
implementations will let the idle CPUs sleep only for some max duration
(~900 ms in case of x86).

If we do care about this imbalance, then we could hope that the balance logic
present in try_to_wake_up and sched_exec may avoid this imbalance, but can we 
bank upon these events to restore the runqueue balance?

If we cannot, then I had something in mind on these lines:

1. A non-idle CPU (having nr_running > 1) can wake up an idle sleeping CPU if it
   finds that the sleeping CPU has not balanced itself for its
   "balance_interval" period.

2. It would be nice to minimize the "cross-domain" wakeups. For example, we may want
   to avoid a non-idle CPU in node B sending a wakeup to an idle sleeping CPU in
   another node A, when this wakeup could have been sent from another non-idle
   CPU in node A itself.
 
	That is why I have imposed the condition of sending the wakeup only when
   a whole sched_group of CPUs is sleeping in a domain. We wake one of them
   up; the chosen one is the one which has not balanced itself for its
   "balance_interval" period.

I did think about avoiding all this and putting some hooks in
wake_up_new_task to wake up the sleeping CPUs. But the problem is that
the woken-up CPU may refuse to pull any tasks and go back to sleep again
if it has balanced itself in the domain "recently" (within balance_interval).


Comments?

Patch (not fully tested) against 2.6.11 follows.


---

 linux-2.6.11-vatsa/kernel/sched.c |   52 ++++++++++++++++++++++++++++++++++++++
 1 files changed, 52 insertions(+)

diff -puN kernel/sched.c~vst-sched_load_balance kernel/sched.c
--- linux-2.6.11/kernel/sched.c~vst-sched_load_balance	2005-04-07 17:51:34.000000000 +0530
+++ linux-2.6.11-vatsa/kernel/sched.c	2005-04-07 17:56:18.000000000 +0530
@@ -1774,9 +1774,17 @@ find_busiest_group(struct sched_domain *
 {
 	struct sched_group *busiest = NULL, *this = NULL, *group = sd->groups;
 	unsigned long max_load, avg_load, total_load, this_load, total_pwr;
+#ifdef CONFIG_VST
+	int grp_sleeping;
+	cpumask_t tmpmask, wakemask;
+#endif
 
 	max_load = this_load = total_load = total_pwr = 0;
 
+#ifdef CONFIG_VST
+	cpus_clear(wakemask);
+#endif
+
 	do {
 		unsigned long load;
 		int local_group;
@@ -1787,7 +1795,20 @@ find_busiest_group(struct sched_domain *
 		/* Tally up the load of all CPUs in the group */
 		avg_load = 0;
 
+#ifdef CONFIG_VST
+		grp_sleeping = 0;
+		cpus_and(tmpmask, group->cpumask, nohz_cpu_mask);
+		if (cpus_equal(tmpmask, group->cpumask))
+			grp_sleeping = 1;
+#endif
+
 		for_each_cpu_mask(i, group->cpumask) {
+#ifdef CONFIG_VST
+			int cpu = smp_processor_id();
+			struct sched_domain *sd1;
+			unsigned long interval;
+			int woken = 0;
+#endif
 			/* Bias balancing toward cpus of our domain */
 			if (local_group)
 				load = target_load(i);
@@ -1796,6 +1817,25 @@ find_busiest_group(struct sched_domain *
 
 			nr_cpus++;
 			avg_load += load;
+
+#ifdef CONFIG_VST
+			if (idle != NOT_IDLE || !grp_sleeping ||
+						(grp_sleeping && woken))
+				continue;
+
+			sd1 = sd + (i-cpu);
+			interval = sd1->balance_interval;
+
+			/* scale ms to jiffies */
+			interval = msecs_to_jiffies(interval);
+			if (unlikely(!interval))
+				interval = 1;
+
+			if (jiffies - sd1->last_balance >= interval) {
+				woken = 1;
+				cpu_set(i, wakemask);
+			}
+#endif
 		}
 
 		if (!nr_cpus)
@@ -1819,6 +1859,18 @@ nextgroup:
 		group = group->next;
 	} while (group != sd->groups);
 
+#ifdef CONFIG_VST
+	if (idle == NOT_IDLE && this_load > SCHED_LOAD_SCALE) {
+		int i;
+
+		for_each_cpu_mask(i, wakemask) {
+			spin_lock(&cpu_rq(i)->lock);
+			resched_task(cpu_rq(i)->idle);
+			spin_unlock(&cpu_rq(i)->lock);
+		}
+	}
+#endif
+
 	if (!busiest || this_load >= max_load)
 		goto out_balanced;
 

_

-- 
Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017


* Re: VST and Sched Load Balance
From: Nick Piggin @ 2005-04-07 13:07 UTC (permalink / raw)
  To: vatsa; +Cc: george, mingo, high-res-timers-discourse, linux-kernel

Srivatsa Vaddagiri wrote:

> I think a potential area which VST may need to address is 
> scheduler load balance. If idle CPUs stop taking local timer ticks for 
> some time, then during that period it could cause the various runqueues to 
> go out of balance, since the idle CPUs will no longer pull tasks from 
> non-idle CPUs. 
> 

Yep.

> Do we care about this imbalance? Especially considering that most 
> implementations will let the idle CPUs sleep only for some max duration
> (~900 ms in case of x86).
> 

I think we do care, yes. It could be pretty harmful to sleep for
even a few 10s of ms on a regular basis for some workloads. Although
I guess many of those will be covered by try_to_wake_up events...

Not sure in practice, I would imagine it will hurt some multiprocessor
workloads.

> If we do care about this imbalance, then we could hope that the balance logic
> present in try_to_wake_up and sched_exec may avoid this imbalance, but can we 
> bank upon these events to restore the runqueue balance?
> 
> If we cannot, then I had something in mind on these lines:
> 
> 1. A non-idle CPU (having nr_running > 1) can wake up an idle sleeping CPU if it
>    finds that the sleeping CPU has not balanced itself for its
>    "balance_interval" period.
> 
> 2. It would be nice to minimize the "cross-domain" wakeups. For example, we may want
>    to avoid a non-idle CPU in node B sending a wakeup to an idle sleeping CPU in
>    another node A, when this wakeup could have been sent from another non-idle
>    CPU in node A itself.
>  

3. This is exactly one of the situations that the balancing backoff code
    was designed for. Can you just schedule interrupts to fire when the
    next balance interval has passed? This may require some adjustments to
    the backoff code in order to get good powersaving, but it would be the
    cleanest approach from the scheduler's point of view.
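
For illustration, a rough sketch of that idea: before cutting the tick, the
idle CPU could cap its sleep at the soonest balance deadline across its
domains. max_sleep and the one-shot timer hook here are hypothetical, not
existing code:

	/* sketch: bound the tickless sleep by the next balance deadline */
	int this_cpu = smp_processor_id();
	struct sched_domain *sd;
	unsigned long next_event = jiffies + max_sleep; /* max_sleep: hypothetical cap */

	for_each_domain(this_cpu, sd) {
		unsigned long due = sd->last_balance +
					msecs_to_jiffies(sd->balance_interval);
		if (time_before(due, next_event))
			next_event = due;
	}
	/* program the one-shot timer to fire at next_event instead of
	 * sleeping indefinitely */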

Nick

-- 
SUSE Labs, Novell Inc.



* Re: VST and Sched Load Balance
From: Srivatsa Vaddagiri @ 2005-04-07 14:00 UTC (permalink / raw)
  To: Nick Piggin; +Cc: george, mingo, high-res-timers-discourse, linux-kernel

On Thu, Apr 07, 2005 at 11:07:55PM +1000, Nick Piggin wrote:
> 3. This is exactly one of the situations that the balancing backoff code
>    was designed for. Can you just schedule interrupts to fire when the
>    next balance interval has passed? This may require some adjustments to
>    the backoff code in order to get good powersaving, but it would be the
>    cleanest approach from the scheduler's point of view.


Hmm ..I guess we could restrict the max time an idle CPU will sleep, taking
into account its balance interval. But whatever heuristics we follow to
maximize the balance_interval of an about-to-sleep idle CPU, don't we still run
the risk of the idle CPU being woken up and going immediately back to sleep
(because there was no imbalance)?

Moreover, we may be greatly reducing the amount of time a CPU is allowed to
sleep this way ...

-- 
Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017


* Re: VST and Sched Load Balance
From: Nick Piggin @ 2005-04-07 14:06 UTC (permalink / raw)
  To: vatsa; +Cc: george, mingo, high-res-timers-discourse, linux-kernel

Srivatsa Vaddagiri wrote:

> 
> Hmm ..I guess we could restrict the max time an idle CPU will sleep, taking
> into account its balance interval. But whatever heuristics we follow to
> maximize the balance_interval of an about-to-sleep idle CPU, don't we still run
> the risk of the idle CPU being woken up and going immediately back to sleep
> (because there was no imbalance)?
> 

Yep.

> Moreover, we may be greatly reducing the amount of time a CPU is allowed to
> sleep this way ...
> 

Yes. I was assuming you get some kind of fairly rapidly diminishing
efficiency return curve based on your maximum sleep time. If that is
not so, then I agree this wouldn't be the best method.

-- 
SUSE Labs, Novell Inc.



* Re: VST and Sched Load Balance
From: Ingo Molnar @ 2005-04-07 15:10 UTC (permalink / raw)
  To: Srivatsa Vaddagiri
  Cc: george, nickpiggin, high-res-timers-discourse, linux-kernel


* Srivatsa Vaddagiri <vatsa@in.ibm.com> wrote:

> Hi,
> 	VST patch (http://lwn.net/Articles/118693/) attempts to avoid useless 
> regular (local) timer ticks when a CPU is idle.
> 
> I think a potential area which VST may need to address is scheduler 
> load balance. If idle CPUs stop taking local timer ticks for some 
> time, then during that period it could cause the various runqueues to 
> go out of balance, since the idle CPUs will no longer pull tasks from 
> non-idle CPUs.
> 
> Do we care about this imbalance? Especially considering that most 
> implementations will let the idle CPUs sleep only for some max 
> duration (~900 ms in case of x86).

yeah, we care about this imbalance; it would materially change the
scheduling logic, a side-effect we don't want. Interaction with VST
is not a big issue right now because this only matters on SMP boxes,
which are a rare (but not unprecedented) target for embedded platforms.

One solution would be to add an exponential backoff (as Nick
suggested too), rather than an unconditional 'we won't fire a timer interrupt
for the next 10 seconds' logic. It still impacts scheduling though.

Another, more effective, less intrusive but also more complex approach 
would be to make a distinction between 'totally idle' and 'partially 
idle or busy' system states. When all CPUs are idle then all timer irqs 
may be stopped and full VST logic applies. When at least one CPU is 
busy, all the other CPUs may still be put to sleep completely and 
immediately, but the busy CPU(s) have to take over a 'watchdog' role, 
and need to run the 'do the idle CPUs need new tasks' balancing 
functions. I.e. the scheduling function of other CPUs is migrated to 
busy CPUs. If there are no busy CPUs then there's no work, so this ought 
to be simple on the VST side. This needs some reorganization on the 
scheduler side but ought to be doable as well.
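
As a rough sketch of that watchdog role (borrowing nohz_cpu_mask and the
resched_task() wakeup from the patch earlier in the thread; the trigger
condition here is illustrative only, not a worked-out policy):

	/* run from a busy CPU's rebalance_tick(); sketch only */
	static void watchdog_kick_sleeping_cpus(runqueue_t *this_rq)
	{
		int i;

		/* only a CPU with excess tasks needs to play watchdog */
		if (this_rq->nr_running <= 1)
			return;

		for_each_cpu_mask(i, nohz_cpu_mask) {
			spin_lock(&cpu_rq(i)->lock);
			resched_task(cpu_rq(i)->idle); /* wake it to rebalance */
			spin_unlock(&cpu_rq(i)->lock);
		}
	}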

	Ingo


* Re: VST and Sched Load Balance
From: Srivatsa Vaddagiri @ 2005-04-08  5:34 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: george, nickpiggin, high-res-timers-discourse, linux-kernel

On Thu, Apr 07, 2005 at 05:10:24PM +0200, Ingo Molnar wrote:
> Interaction with VST is not a big issue right now because this only matters
> on SMP boxes, which are a rare (but not unprecedented) target for embedded
> platforms.

Well, I don't think VST is targeting just power management in embedded
platforms. Even (virtualized) servers will benefit from this patch, by
making use of the (virtual) CPU resources more efficiently.

-- 
Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017


* Re: VST and Sched Load Balance
From: Nick Piggin @ 2005-04-08  6:33 UTC (permalink / raw)
  To: vatsa; +Cc: Ingo Molnar, george, high-res-timers-discourse, linux-kernel

Srivatsa Vaddagiri wrote:
> On Thu, Apr 07, 2005 at 05:10:24PM +0200, Ingo Molnar wrote:
> 
>>Interaction with VST is not a big issue right now because this only matters
>>on SMP boxes, which are a rare (but not unprecedented) target for embedded
>>platforms.
> 
> 
> Well, I don't think VST is targeting just power management in embedded
> platforms. Even (virtualized) servers will benefit from this patch, by
> making use of the (virtual) CPU resources more efficiently.
> 

I still think looking at just using the rebalance backoff would be
a good start.

What would be really nice is to measure the power draw on your favourite
SMP system with your current patches that *don't* schedule ticks to
service rebalancing.

Then measure again with the current rebalance backoff settings (which
will likely not be very good, because some intervals are constrained to
quite small values).

Then we can aim for something like 80-90% of the first (i.e. perfect)
efficiency rating.

-- 
SUSE Labs, Novell Inc.



* Re: VST and Sched Load Balance
From: Nish Aravamudan @ 2005-04-19 16:07 UTC (permalink / raw)
  To: vatsa; +Cc: george, nickpiggin, mingo, high-res-timers-discourse, linux-kernel

On 4/7/05, Srivatsa Vaddagiri <vatsa@in.ibm.com> wrote:
> Hi,
>         VST patch (http://lwn.net/Articles/118693/) attempts to avoid useless
> regular (local) timer ticks when a CPU is idle.

<snip>

>  linux-2.6.11-vatsa/kernel/sched.c |   52 ++++++++++++++++++++++++++++++++++++++
>  1 files changed, 52 insertions(+)
> 
> diff -puN kernel/sched.c~vst-sched_load_balance kernel/sched.c
> --- linux-2.6.11/kernel/sched.c~vst-sched_load_balance  2005-04-07 17:51:34.000000000 +0530
> +++ linux-2.6.11-vatsa/kernel/sched.c   2005-04-07 17:56:18.000000000 +0530

<snip>

> @@ -1796,6 +1817,25 @@ find_busiest_group(struct sched_domain *
> 
>                         nr_cpus++;
>                         avg_load += load;
> +
> +#ifdef CONFIG_VST
> +                       if (idle != NOT_IDLE || !grp_sleeping ||
> +                                               (grp_sleeping && woken))
> +                               continue;
> +
> +                       sd1 = sd + (i-cpu);
> +                       interval = sd1->balance_interval;
> +
> +                       /* scale ms to jiffies */
> +                       interval = msecs_to_jiffies(interval);
> +                       if (unlikely(!interval))
> +                               interval = 1;
> +
> +                       if (jiffies - sd1->last_balance >= interval) {
> +                               woken = 1;
> +                               cpu_set(i, wakemask);
> +                       }

Sorry for the late reply, but shouldn't this jiffies comparison be
done with time_after() or time_before()?

Thanks,
Nish


* Re: VST and Sched Load Balance
From: Srivatsa Vaddagiri @ 2005-04-20  9:11 UTC (permalink / raw)
  To: Nish Aravamudan
  Cc: george, nickpiggin, mingo, high-res-timers-discourse, linux-kernel

On Tue, Apr 19, 2005 at 09:07:49AM -0700, Nish Aravamudan wrote:
> > +                       if (jiffies - sd1->last_balance >= interval) {


> Sorry for the late reply, but shouldn't this jiffies comparison be
> done with time_after() or time_before()?

I think it is not needed. The check should be able to handle the overflow case
as well.

This probably assumes that you don't sleep longer than (2^32 - 1) jiffies
(which is ~1193 hrs at HZ=1000). The current VST implementation lets us sleep
way less than that limit (~896 ms) since it uses a 32-bit number for sampling
the TSC. When it is upgraded to use a 64-bit number, we may have to ensure that
this limit (1193 hrs) is not exceeded.
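
For reference, the equivalent check written with the <linux/jiffies.h> helpers
would look like this (a sketch only; behaviour matches the open-coded
subtraction in the patch for any sleep well below the wrap limit):

	if (time_after_eq(jiffies, sd1->last_balance + interval)) {
		woken = 1;
		cpu_set(i, wakemask);
	}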

-- 
Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017


* Re: VST and Sched Load Balance
From: Srivatsa Vaddagiri @ 2005-05-05 14:39 UTC (permalink / raw)
  To: Nick Piggin, mingo; +Cc: george, high-res-timers-discourse, linux-kernel

On Thu, Apr 07, 2005 at 11:07:55PM +1000, Nick Piggin wrote:
> Srivatsa Vaddagiri wrote:
> 
> >I think a potential area which VST may need to address is 
> >scheduler load balance. If idle CPUs stop taking local timer ticks for 
> >some time, then during that period it could cause the various runqueues to 
> >go out of balance, since the idle CPUs will no longer pull tasks from 
> >non-idle CPUs. 
> >
> 
> Yep.
> 
> >Do we care about this imbalance? Especially considering that most 
> >implementations will let the idle CPUs sleep only for some max duration
> >(~900 ms in case of x86).
> >
> 
> I think we do care, yes. It could be pretty harmful to sleep for
> even a few 10s of ms on a regular basis for some workloads. Although
> I guess many of those will be covered by try_to_wake_up events...
> 
> Not sure in practice, I would imagine it will hurt some multiprocessor
> workloads.

I am looking at the recent changes in load balance and I see that load
balance on fork has been introduced (SD_BALANCE_FORK). I think this changes
the whole scenario.

Considering the fact that there was already balance on wake_up and the 
fact that the scheduler checks for imbalance before running the idle task
(load_balance_newidle), I don't know if sleeping idle CPUs can cause a 
load imbalance (fork/wakeup happening on other CPUs will probably push
tasks to it and wake it up anyway? exits can change the balance, but that is
probably not relevant here?)

Except for a small fact: if the CPU sleeps w/o taking rebalance_ticks,
its cpu_load[] can become incorrect over a period.

I noticed that load_balance_newidle uses newidle_idx to gauge the current cpu's 
load. As a result, it can see non-zero load for the idle cpu. Because of this 
it can decide to not pull tasks.  

The rationale here (of using non-zero load): is it to try and let the
cpu become idle? Somehow, this doesn't make sense, because in the very next 
rebalance_tick (assuming that the idle cpu does not sleep), it will start using 
the idle_idx, which will cause the load to show up as zero and can cause the 
idle CPU to pull some tasks.

Have I missed something here?

Anyway, if the idle cpu were to sleep instead, the next rebalance_tick will 
not happen and it will not pull the tasks to restore load balance.

If my above understanding is correct, I see two potential solutions for this:


	A. Have load_balance_newidle use zero load for the current cpu while
	  checking for the busiest cpu.
	B. Or, if we want to retain load_balance_newidle the way it is, have
	  the idle thread call back into the scheduler to zero the load and retry
	  load balance, _when_ it decides that it wants to sleep (there
	  are conditions under which an idle cpu may not want to sleep, for example:
	  the next timer is only a tick, 1ms, away).

In either case, if the load balance still fails to pull any tasks, then it means
there is really no imbalance. Tasks that will be added into the system later 
(fork/wake_up) will be balanced across the CPUs because of the load-balance 
code that runs during those events.
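
For illustration, the caller of the idle_balance_retry() added by the patch
below might sit in the idle loop roughly like this; can_stop_tick() and
vst_sleep() are hypothetical stand-ins for whatever hooks VST uses to decide
on and enter the tickless sleep:

	/* arch idle loop; sketch only */
	while (!need_resched()) {
		if (can_stop_tick() && !idle_balance_retry())
			vst_sleep();	/* pulled nothing; safe to stop the tick */
		else
			cpu_relax();	/* next timer too close; stay tick-driven */
	}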

A possible patch for B follows below:


---

 linux-2.6.12-rc3-mm2-vatsa/include/linux/sched.h |    1 
 linux-2.6.12-rc3-mm2-vatsa/kernel/sched.c        |   38 +++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff -puN kernel/sched.c~sched-nohz kernel/sched.c
--- linux-2.6.12-rc3-mm2/kernel/sched.c~sched-nohz	2005-05-04 18:23:30.000000000 +0530
+++ linux-2.6.12-rc3-mm2-vatsa/kernel/sched.c	2005-05-05 11:37:12.000000000 +0530
@@ -2214,6 +2214,44 @@ static inline void idle_balance(int this
 	}
 }
 
+#ifdef CONFIG_NO_IDLE_HZ
+/*
+ * Try hard to pull tasks. Called by idle task before it sleeps shutting off
+ * local timer ticks.  This clears the various load counters and tries to pull
+ * tasks. If it cannot, then it means that there is really no imbalance at this
+ * point. Any imbalance that arises in future (because of fork/wake_up) will be
+ * handled by the load balance that happens during those events.
+ *
+ * Returns 1 if tasks were pulled over, 0 otherwise.
+ */
+int idle_balance_retry(void)
+{
+	int j, moved = 0, this_cpu = smp_processor_id();
+	struct sched_domain *sd;
+	runqueue_t *this_rq = this_rq();
+	unsigned long flags;
+
+	spin_lock_irqsave(&this_rq->lock, flags);
+
+	for (j = 0; j < 3; j++)
+		this_rq->cpu_load[j] = 0;
+
+	for_each_domain(this_cpu, sd) {
+		if (sd->flags & SD_BALANCE_NEWIDLE) {
+			if (load_balance_newidle(this_cpu, this_rq, sd)) {
+				/* We've pulled tasks over so stop searching */
+				moved = 1;
+				break;
+			}
+		}
+	}
+
+	spin_unlock_irqrestore(&this_rq->lock, flags);
+
+	return moved;
+}
+#endif
+
 /*
  * active_load_balance is run by migration threads. It pushes running tasks
  * off the busiest CPU onto idle CPUs. It requires at least 1 task to be
diff -puN include/linux/sched.h~sched-nohz include/linux/sched.h
--- linux-2.6.12-rc3-mm2/include/linux/sched.h~sched-nohz	2005-05-04 18:23:30.000000000 +0530
+++ linux-2.6.12-rc3-mm2-vatsa/include/linux/sched.h	2005-05-04 18:23:37.000000000 +0530
@@ -897,6 +897,7 @@ extern int task_curr(const task_t *p);
 extern int idle_cpu(int cpu);
 extern int sched_setscheduler(struct task_struct *, int, struct sched_param *);
 extern task_t *idle_task(int cpu);
+extern int idle_balance_retry(void);
 
 void yield(void);
 

_

-- 
Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017


* Re: VST and Sched Load Balance
From: Nick Piggin @ 2005-05-05 14:52 UTC (permalink / raw)
  To: vatsa; +Cc: mingo, george, high-res-timers-discourse, linux-kernel

Srivatsa Vaddagiri wrote:
> On Thu, Apr 07, 2005 at 11:07:55PM +1000, Nick Piggin wrote:

>>I think we do care, yes. It could be pretty harmful to sleep for
>>even a few 10s of ms on a regular basis for some workloads. Although
>>I guess many of those will be covered by try_to_wake_up events...
>>
>>Not sure in practice, I would imagine it will hurt some multiprocessor
>>workloads.
> 
> 
> I am looking at the recent changes in load balance and I see that load
> balance on fork has been introduced (SD_BALANCE_FORK). I think this changes
> the whole scenario.
> 
> Considering the fact that there was already balance on wake_up and the 
> fact that the scheduler checks for imbalance before running the idle task
> (load_balance_newidle), I don't know if sleeping idle CPUs can cause a 
> load imbalance (fork/wakeup happening on other CPUs will probably push
> tasks to it and wake it up anyway? exits can change the balance, but that is
> probably not relevant here?)
> 

Well, there are a lot of ifs and buts. Some domains won't implement
fork balancing, others won't do newidle balancing, wake balancing,
wake to idle, etc etc.

I think my idea of allowing max_interval to be extended to a
sufficiently large value if one CPU goes idle, and shutting off all
CPUs' rebalancing completely if no tasks are running for some time,
should cater to both hypervisor images and power saving concerns,
while not being very intrusive to normal operation of the scheduler.
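
A rough sketch of the first half of that idea; IDLE_MAX_INTERVAL is a
hypothetical ceiling, not an existing constant:

	/* stretch the backoff ceiling on each domain when a CPU goes
	 * tickless-idle; sketch only */
	#define IDLE_MAX_INTERVAL	(10 * 1000)	/* ms; hypothetical */

	static void relax_balance_backoff(int cpu)
	{
		struct sched_domain *sd;

		for_each_domain(cpu, sd)
			sd->max_interval = IDLE_MAX_INTERVAL;
	}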

> Except for a small fact: if the CPU sleeps w/o taking rebalance_ticks,
> its cpu_load[] can become incorrect over a period.
> 
> I noticed that load_balance_newidle uses newidle_idx to gauge the current cpu's 
> load. As a result, it can see non-zero load for the idle cpu. Because of this 
> it can decide to not pull tasks.  
> 
> The rationale here (of using non-zero load): is it to try and let the
> cpu become idle? Somehow, this doesn't make sense, because in the very next 
> rebalance_tick (assuming that the idle cpu does not sleep), it will start using 
> the idle_idx, which will cause the load to show up as zero and can cause the 
> idle CPU to pull some tasks.
> 
> Have I missed something here?
> 

No. I think you are right in that we'll want to make sure cpu_load is
zero before cutting the timer tick.

> Anyway, if the idle cpu were to sleep instead, the next rebalance_tick will 
> not happen and it will not pull the tasks to restore load balance.
> 
> If my above understanding is correct, I see two potential solutions for this:
> 
> 
> 	A. Have load_balance_newidle use zero load for the current cpu while
> 	  checking for the busiest cpu.
> 	B. Or, if we want to retain load_balance_newidle the way it is, have
> 	  the idle thread call back into the scheduler to zero the load and retry
> 	  load balance, _when_ it decides that it wants to sleep (there
> 	  are conditions under which an idle cpu may not want to sleep, for example:
> 	  the next timer is only a tick, 1ms, away).
> 
> In either case, if the load balance still fails to pull any tasks, then it means
> there is really no imbalance. Tasks that will be added into the system later 
> (fork/wake_up) will be balanced across the CPUs because of the load-balance 
> code that runs during those events.
> 
> A possible patch for B follows below:
> 

Yeah something like that should do it.

-- 
SUSE Labs, Novell Inc.



* Re: VST and Sched Load Balance
From: Srivatsa Vaddagiri @ 2005-05-05 16:15 UTC (permalink / raw)
  To: Nick Piggin; +Cc: mingo, george, high-res-timers-discourse, linux-kernel

On Fri, May 06, 2005 at 12:52:48AM +1000, Nick Piggin wrote:
> Well, there are a lot of ifs and buts. Some domains won't implement
> fork balancing, others won't do newidle balancing, wake balancing,
> wake to idle, etc etc.

Good point. I had somehow assumed that these were true for all domains.
My bad ..

> 
> I think my idea of allowing max_interval to be extended to a
> sufficiently large value if one CPU goes idle, and shutting off all
> CPUs' rebalancing completely if no tasks are running for some time,
> should cater to both hypervisor images and power saving concerns.

Maybe we should check with virtual machine folks on this. Will
check with UML and S390 folks tomorrow.

> >A possible patch for B follows below:
> >
> 
> Yeah something like that should do it.

Ok ..thanks. Will include this patch in the final
load-balance-fix-for-no-hz-idle-cpus patch!

-- 
Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017


* Re: VST and Sched Load Balance
From: Srivatsa Vaddagiri @ 2005-04-07 16:25 UTC (permalink / raw)
  To: mingo; +Cc: george, nickpiggin, high-res-timers-discourse, linux-kernel, vatsa

[Sorry about sending my response from a different account. Can't seem
to access my ibm account right now]

* Ingo wrote:

> Another, more effective, less intrusive but also more complex approach
> would be to make a distinction between 'totally idle' and 'partially
> idle or busy' system states. When all CPUs are idle then all timer irqs
> may be stopped and full VST logic applies. When at least one CPU is
> busy, all the other CPUs may still be put to sleep completely and
> immediately, but the busy CPU(s) have to take over a 'watchdog' role,
> and need to run the 'do the idle CPUs need new tasks' balancing
> functions. I.e. the scheduling function of other CPUs is migrated to
> busy CPUs. If there are no busy CPUs then there's no work, so this ought
> to be simple on the VST side. This needs some reorganization on the
> scheduler side but ought to be doable as well.


Hmm ..I think this is the approach that I have followed in my patch, where
busy CPUs act as watchdogs and wake up sleeping CPUs at an appropriate
time. The appropriate time is currently based on the busy CPU's load
being greater than 1 and the sleeping CPU not having balanced itself for its
minimum balance_interval.

Do you have any other suggestions on how the watchdog function should
be implemented?

- vatsa


