From: Tobias Huschle <huschle@linux.ibm.com>
To: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Ricardo Neri <ricardo.neri@intel.com>,
	"Ravi V . Shankar" <ravi.v.shankar@intel.com>,
	Ben Segall <bsegall@google.com>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Len Brown <len.brown@intel.com>, Mel Gorman <mgorman@suse.de>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Valentin Schneider <vschneid@redhat.com>,
	Ionela Voinescu <ionela.voinescu@arm.com>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
	naveen.n.rao@linux.vnet.ibm.com,
	Yicong Yang <yangyicong@hisilicon.com>,
	Barry Song <v-songbaohua@oppo.com>, Chen Yu <yu.c.chen@intel.com>,
	Hillf Danton <hdanton@sina.com>
Subject: Re: [Patch v3 3/6] sched/fair: Implement prefer sibling imbalance calculation between asymmetric groups
Date: Fri, 14 Jul 2023 16:22:56 +0200	[thread overview]
Message-ID: <b119d88384584e603056cec942c47e14@linux.ibm.com> (raw)
In-Reply-To: <c5a49136-3549-badd-ec8f-3de4e7bb7b7d@linux.vnet.ibm.com>

On 2023-07-14 15:14, Shrikanth Hegde wrote:
> On 7/8/23 4:27 AM, Tim Chen wrote:
>> From: Tim C Chen <tim.c.chen@linux.intel.com>
>> 
>> In the current prefer sibling load balancing code, there is an implicit
>> assumption that the busiest sched group and local sched group are
>> equivalent, hence the tasks to be moved is simply the difference in
>> number of tasks between the two groups (i.e. imbalance) divided by two.
>> 
>> However, we may have different number of cores between the cluster
>> groups, say when we take CPU offline or we have hybrid groups.  In that
>> case, we should balance between the two groups such that #tasks/#cores
>> ratio is the same between the same between both groups.  Hence the imbalance
> 
> nit: typo here: "the same between" is repeated.
> 
>> computed will need to reflect this.
>> 
>> Adjust the sibling imbalance computation to take into account of the
>> above considerations.
>> 
>> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
>> ---
>>  kernel/sched/fair.c | 41 +++++++++++++++++++++++++++++++++++++----
>>  1 file changed, 37 insertions(+), 4 deletions(-)
>> 
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index f636d6c09dc6..f491b94908bf 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -9372,6 +9372,41 @@ static inline bool smt_balance(struct lb_env *env, struct sg_lb_stats *sgs,
>>  	return false;
>>  }
>> 
>> +static inline long sibling_imbalance(struct lb_env *env,
>> +				    struct sd_lb_stats *sds,
>> +				    struct sg_lb_stats *busiest,
>> +				    struct sg_lb_stats *local)
>> +{
>> +	int ncores_busiest, ncores_local;
>> +	long imbalance;
> 
> Can imbalance be unsigned int or unsigned long, since sum_nr_running
> is unsigned int?
> 
>> +
>> +	if (env->idle == CPU_NOT_IDLE || !busiest->sum_nr_running)
>> +		return 0;
>> +
>> +	ncores_busiest = sds->busiest->cores;
>> +	ncores_local = sds->local->cores;
>> +
>> +	if (ncores_busiest == ncores_local) {
>> +		imbalance = busiest->sum_nr_running;
>> +		lsub_positive(&imbalance, local->sum_nr_running);
>> +		return imbalance;
>> +	}
>> +
>> +	/* Balance such that nr_running/ncores ratio are same on both groups */
>> +	imbalance = ncores_local * busiest->sum_nr_running;
>> +	lsub_positive(&imbalance, ncores_busiest * local->sum_nr_running);
>> +	/* Normalize imbalance and do rounding on normalization */
>> +	imbalance = 2 * imbalance + ncores_local + ncores_busiest;
>> +	imbalance /= ncores_local + ncores_busiest;
>> +
> 
> Could this work for the case where the number of CPUs/cores differs
> between two sched groups in a sched domain, such as the problem pointed
> out by Tobias on s390? It would be nice if this patch worked for that
> case as well. I ran the numbers for a few cases and it looks to work.
> https://lore.kernel.org/lkml/20230704134024.GV4253@hirez.programming.kicks-ass.net/T/#rb0a7dcd28532cafc24101e1d0aed79e6342e3901
> 


Just stumbled upon this patch series as well. In this version it looks
similar to the prototypes I played around with, but more complete.
So I'm happy that my understanding of the load balancer was kinda
correct :)

From a functional perspective, this appears to address the issues we saw
on s390.
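
To convince myself, I ran the math with some made-up numbers (the core
and task counts below are hypothetical, not taken from a real s390
topology). A minimal userspace sketch of the patch's computation, with
lsub_positive() replaced by an explicit clamp to zero:

  #include <stdio.h>

  /* Userspace re-creation of sibling_imbalance()'s math; not kernel code. */
  static long sibling_imbalance_sketch(long ncores_busiest, long ncores_local,
                                       long busiest_nr, long local_nr)
  {
          long imbalance;

          if (ncores_busiest == ncores_local) {
                  imbalance = busiest_nr - local_nr;
                  return imbalance > 0 ? imbalance : 0;   /* lsub_positive() */
          }

          /* Balance such that nr_running/ncores is the same on both groups. */
          imbalance = ncores_local * busiest_nr - ncores_busiest * local_nr;
          if (imbalance < 0)                              /* lsub_positive() */
                  imbalance = 0;
          /* Normalize the imbalance and round on normalization. */
          imbalance = 2 * imbalance + ncores_local + ncores_busiest;
          imbalance /= ncores_local + ncores_busiest;

          /* Take advantage of resources in an empty sched group. */
          if (imbalance == 0 && local_nr == 0 && busiest_nr > 1)
                  imbalance = 2;

          return imbalance;
  }

  int main(void)
  {
          /* Hypothetical asymmetric groups: 8 cores vs. 4 cores. */
          printf("%ld\n", sibling_imbalance_sketch(8, 4, 6, 1));
          return 0;
  }

With 6 tasks on the 8-core group and 1 task on the 4-core group, the raw
value is 4 * 6 - 8 * 1 = 16 and the normalized result is
(2 * 16 + 12) / 12 = 3; the exact ratio-equalizing shift would be
16 / 12 ~= 1.33 tasks, so, if I read the normalization right, the result
stays close to twice that shift without needing floating point.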

> 
> 
>> +	/* Take advantage of resource in an empty sched group */
>> +	if (imbalance == 0 && local->sum_nr_running == 0 &&
>> +	    busiest->sum_nr_running > 1)
>> +		imbalance = 2;
>> +
> 
> I don't see how this case would be true. When there are unequal numbers
> of cores, local->sum_nr_running is 0, and busiest->sum_nr_running is at
> least 2, imbalance will be at least 1.
> 
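That matches a quick brute force over small, made-up bounds (again just
a userspace sketch; the ranges are arbitrary): with unequal core counts,
local->sum_nr_running == 0 and busiest->sum_nr_running >= 2, the
normalized value never comes out as 0, so the special case looks
unreachable to me as well:

  #include <assert.h>
  #include <stdio.h>

  int main(void)
  {
          /* Arbitrary bounds, wide enough for realistic topologies. */
          for (long nl = 1; nl <= 64; nl++)
                  for (long nb = 1; nb <= 64; nb++) {
                          if (nl == nb)
                                  continue; /* handled by the early return */
                          for (long busiest = 2; busiest <= 512; busiest++) {
                                  /* local->sum_nr_running is 0 here */
                                  long imb = nl * busiest;
                                  imb = (2 * imb + nl + nb) / (nl + nb);
                                  assert(imb >= 1);
                          }
                  }
          printf("imbalance >= 1 in all tested cases\n");
          return 0;
  }

Algebraically it is the same observation: with local at 0 the raw value
is ncores_local * busiest->sum_nr_running >= 2, so the dividend
2 * imbalance + ncores_local + ncores_busiest is strictly greater than
the divisor ncores_local + ncores_busiest, and the division cannot
round down to 0.
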
> 
> Reviewed-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
> 
>> +	return imbalance;
>> +}
>> +
>>  static inline bool
>>  sched_reduced_capacity(struct rq *rq, struct sched_domain *sd)
>>  {
>> @@ -10230,14 +10265,12 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>>  		}
>> 
>>  		if (busiest->group_weight == 1 || sds->prefer_sibling) {
>> -			unsigned int nr_diff = busiest->sum_nr_running;
>>  			/*
>>  			 * When prefer sibling, evenly spread running tasks on
>>  			 * groups.
>>  			 */
>>  			env->migration_type = migrate_task;
>> -			lsub_positive(&nr_diff, local->sum_nr_running);
>> -			env->imbalance = nr_diff;
>> +			env->imbalance = sibling_imbalance(env, sds, busiest, local);
>>  		} else {
>> 
>>  			/*
>> @@ -10424,7 +10457,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
>>  	 * group's child domain.
>>  	 */
>>  	if (sds.prefer_sibling && local->group_type == group_has_spare &&
>> -	    busiest->sum_nr_running > local->sum_nr_running + 1)
>> +	    sibling_imbalance(env, &sds, busiest, local) > 1)
>>  		goto force_balance;
>> 
>>  	if (busiest->group_type != group_overloaded) {
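
One more observation on the find_busiest_group() change: for symmetric
groups the new condition appears to reduce to the old one. With equal
core counts, sibling_imbalance() returns busiest->sum_nr_running -
local->sum_nr_running clamped at 0, so "sibling_imbalance(...) > 1" is
the old "busiest->sum_nr_running > local->sum_nr_running + 1", modulo
the new CPU_NOT_IDLE early return. A tiny sanity check over hypothetical
task counts:

  #include <assert.h>

  int main(void)
  {
          for (long B = 0; B <= 16; B++)           /* busiest->sum_nr_running */
                  for (long L = 0; L <= 16; L++) { /* local->sum_nr_running  */
                          long s = B > L ? B - L : 0; /* equal-cores branch */
                          assert((s > 1) == (B > L + 1));
                  }
          return 0;
  }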
