From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Aubrey Li <aubrey.li@linux.intel.com>,
	Barry Song <song.bao.hua@hisilicon.com>,
	Mike Galbraith <efault@gmx.de>,
	Gautham Shenoy <gautham.shenoy@amd.com>,
	K Prateek Nayak <kprateek.nayak@amd.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/2] sched/fair: Improve consistency of allowed NUMA balance calculations
Date: Mon, 14 Feb 2022 15:56:42 +0530	[thread overview]
Message-ID: <20220214102642.GH618915@linux.vnet.ibm.com> (raw)
In-Reply-To: <20220208094334.16379-2-mgorman@techsingularity.net>

* Mel Gorman <mgorman@techsingularity.net> [2022-02-08 09:43:33]:

> The checks that determine whether a NUMA imbalance is allowed are
> inconsistent and should be corrected.
> 
> o allow_numa_imbalance implicitly changes the parameter types and is
>   not always examining the destination group, so both the types and
>   the naming should be corrected.
> o find_idlest_group uses the sched_domain's weight instead of the
>   group weight which is different to find_busiest_group
> o find_busiest_group uses the source group instead of the destination
>   which is different to task_numa_find_cpu
> o Both find_idlest_group and find_busiest_group should account
>   for the number of running tasks if a move was allowed, to be
>   consistent with task_numa_find_cpu
> 
> Fixes: 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between NUMA nodes")
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
>  kernel/sched/fair.c | 18 ++++++++++--------
>  1 file changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 095b0aa378df..4592ccf82c34 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9003,9 +9003,10 @@ static bool update_pick_idlest(struct sched_group *idlest,
>   * This is an approximation as the number of running tasks may not be
>   * related to the number of busy CPUs due to sched_setaffinity.
>   */
> -static inline bool allow_numa_imbalance(int dst_running, int dst_weight)
> +static inline bool
> +allow_numa_imbalance(unsigned int running, unsigned int weight)
>  {
> -	return (dst_running < (dst_weight >> 2));
> +	return (running < (weight >> 2));
>  }
> 
>  /*
> @@ -9139,12 +9140,13 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>  				return idlest;
>  #endif
>  			/*
> -			 * Otherwise, keep the task on this node to stay close
> -			 * its wakeup source and improve locality. If there is
> -			 * a real need of migration, periodic load balance will
> -			 * take care of it.
> +			 * Otherwise, keep the task close to the wakeup source
> +			 * and improve locality if the number of running tasks
> +			 * would remain below threshold where an imbalance is
> +			 * allowed. If there is a real need of migration,
> +			 * periodic load balance will take care of it.
>  			 */
> -			if (allow_numa_imbalance(local_sgs.sum_nr_running, sd->span_weight))
> +			if (allow_numa_imbalance(local_sgs.sum_nr_running + 1, local_sgs.group_weight))
>  				return NULL;
>  		}
> 
> @@ -9350,7 +9352,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  		/* Consider allowing a small imbalance between NUMA groups */
>  		if (env->sd->flags & SD_NUMA) {
>  			env->imbalance = adjust_numa_imbalance(env->imbalance,
> -				busiest->sum_nr_running, busiest->group_weight);
> +				local->sum_nr_running + 1, local->group_weight);
>  		}
> 
>  		return;
> -- 
> 2.31.1
> 

Looks good to me.
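
One small sanity check on the new semantics, in case it is useful: a
quick userspace sketch of the 25% threshold. Only allow_numa_imbalance()
below mirrors the patch; the main() harness around it is mine, purely
for illustration. With a 16-CPU group, passing sum_nr_running + 1 means
the imbalance stops being tolerated as soon as placing the waking task
would bring the running count to 4, i.e. a quarter of the group:

#include <stdbool.h>
#include <stdio.h>

/*
 * Mirrors the helper from the patch: tolerate an imbalance while fewer
 * than a quarter of the group's CPUs would be busy.
 */
static inline bool
allow_numa_imbalance(unsigned int running, unsigned int weight)
{
	return running < (weight >> 2);
}

int main(void)
{
	unsigned int group_weight = 16;	/* e.g. a 16-CPU NUMA node */
	unsigned int nr_running;

	/*
	 * find_idlest_group() now passes sum_nr_running + 1, i.e. the
	 * running count after the woken task would be placed.
	 */
	for (nr_running = 0; nr_running <= 4; nr_running++)
		printf("running=%u (after placement %u): imbalance %s\n",
		       nr_running, nr_running + 1,
		       allow_numa_imbalance(nr_running + 1, group_weight) ?
		       "allowed" : "not allowed");

	return 0;
}

Compared to the old sd->span_weight based check, the cutover now tracks
the group size and accounts for the task being placed, which matches
what task_numa_find_cpu does.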

Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>

-- 
Thanks and Regards
Srikar Dronamraju

