From: Mel Gorman <mgorman@techsingularity.net>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Aubrey Li <aubrey.li@linux.intel.com>,
	Barry Song <song.bao.hua@hisilicon.com>,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Linux-ARM <linux-arm-kernel@lists.infradead.org>,
	Mel Gorman <mgorman@techsingularity.net>
Subject: [PATCH 3/4] sched/fair: Return an idle cpu if one is found after a failed search for an idle core
Date: Mon,  7 Dec 2020 09:15:15 +0000	[thread overview]
Message-ID: <20201207091516.24683-4-mgorman@techsingularity.net> (raw)
In-Reply-To: <20201207091516.24683-1-mgorman@techsingularity.net>

select_idle_core() is called when SMT is active and there is likely a free
core available. It may find individual idle CPUs during its scan, but that
information is simply discarded and the search starts over again with
select_idle_cpu().
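For context, the relevant tail of select_idle_sibling() in 5.10 looks
roughly like this (simplified sketch; the earlier checks for the target,
previous and recently used CPUs are omitted):

	i = select_idle_core(p, sd, target);
	if ((unsigned)i < nr_cpumask_bits)
		return i;

	i = select_idle_cpu(p, sd, target);
	if ((unsigned)i < nr_cpumask_bits)
		return i;

	i = select_idle_smt(p, sd, target);
	if ((unsigned)i < nr_cpumask_bits)
		return i;

When select_idle_core() fails, every runqueue it already peeked at is
scanned a second time by select_idle_cpu().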

This patch caches the first eligible idle CPU found while searching for
an idle core and uses it if no idle core is found. This is a tradeoff:
there may be a slight impact when utilisation is low and an idle core
can be found quickly, but it provides improvements as the number of
busy CPUs approaches 50% of the domain size when SMT is enabled.

With tbench on a 2-socket Cascade Lake machine (80 logical CPUs, HT
enabled); figures are throughput in MB/sec by client count, with the
relative difference to the baseline alongside:

                          5.10.0-rc6             5.10.0-rc6
                           schedstat          idlecandidate
Hmean     1        500.06 (   0.00%)      505.67 *   1.12%*
Hmean     2        975.90 (   0.00%)      974.06 *  -0.19%*
Hmean     4       1902.95 (   0.00%)     1904.43 *   0.08%*
Hmean     8       3761.73 (   0.00%)     3721.02 *  -1.08%*
Hmean     16      6713.93 (   0.00%)     6769.17 *   0.82%*
Hmean     32     10435.31 (   0.00%)    10312.58 *  -1.18%*
Hmean     64     12325.51 (   0.00%)    13792.01 *  11.90%*
Hmean     128    21225.21 (   0.00%)    20963.44 *  -1.23%*
Hmean     256    20532.83 (   0.00%)    20335.62 *  -0.96%*
Hmean     320    20334.81 (   0.00%)    20147.25 *  -0.92%*

Note that there is a significant corner case: as the SMT scan may be
terminated early, not all CPUs will have been visited and select_idle_cpu()
is still called for a full scan. That case is handled in the next
patch.
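
For reference, here is a sketch of select_idle_core() with this patch
applied. It is reconstructed from the hunks below plus the surrounding
5.10 code, so treat the comment wording as paraphrased rather than
verbatim:

static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
{
	int idle_candidate = -1;
	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
	int core, cpu;

	if (!static_branch_likely(&sched_smt_present))
		return -1;

	if (!test_idle_cores(target, false))
		return -1;

	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

	for_each_cpu_wrap(core, cpus, target) {
		bool idle = true;

		for_each_cpu(cpu, cpu_smt_mask(core)) {
			if (!available_idle_cpu(cpu)) {
				idle = false;
				break;
			}

			/*
			 * Remember the first allowed idle CPU in case
			 * no fully idle core is found.
			 */
			if (idle_candidate == -1 &&
			    cpumask_test_cpu(cpu, p->cpus_ptr)) {
				idle_candidate = cpu;
			}
		}

		if (idle)
			return core;

		cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
	}

	/* No idle core exists; stop scanning for one. */
	set_idle_cores(target, 0);

	return idle_candidate;
}

The explicit cpumask_test_cpu(cpu, p->cpus_ptr) check matters because
the inner loop walks cpu_smt_mask(core), which can include SMT siblings
the task is not allowed to run on, even though "cpus" was already
restricted to p->cpus_ptr.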

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 kernel/sched/fair.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 01b38fc17bca..00c3b526a5bd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6066,6 +6066,7 @@ void __update_idle_core(struct rq *rq)
  */
 static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
 {
+	int idle_candidate = -1;
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	int core, cpu;
 
@@ -6085,6 +6086,11 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 				idle = false;
 				break;
 			}
+
+			if (idle_candidate == -1 &&
+			    cpumask_test_cpu(cpu, p->cpus_ptr)) {
+				idle_candidate = cpu;
+			}
 		}
 
 		if (idle)
@@ -6098,7 +6104,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 	 */
 	set_idle_cores(target, 0);
 
-	return -1;
+	return idle_candidate;
 }
 
 /*
-- 
2.26.2



Thread overview: 21+ messages
2020-12-07  9:15 [RFC PATCH 0/4] Reduce worst-case scanning of runqueues in select_idle_sibling Mel Gorman
2020-12-07  9:15 ` [PATCH 1/4] sched/fair: Remove SIS_AVG_CPU Mel Gorman
2020-12-07 15:05   ` Vincent Guittot
2020-12-08 10:07   ` Dietmar Eggemann
2020-12-08 10:59     ` Mel Gorman
2020-12-08 13:24       ` Vincent Guittot
2020-12-08 13:36         ` Mel Gorman
2020-12-08 13:43           ` Vincent Guittot
2020-12-08 13:53             ` Mel Gorman
2020-12-08 14:47               ` Vincent Guittot
2020-12-08 15:12                 ` Mel Gorman
2020-12-08 15:19                   ` Vincent Guittot
2020-12-07  9:15 ` [PATCH 2/4] sched/fair: Do not replace recent_used_cpu with the new target Mel Gorman
2020-12-08  9:57   ` Dietmar Eggemann
2020-12-08 11:02     ` Mel Gorman
2020-12-07  9:15 ` Mel Gorman [this message]
2020-12-07 15:06   ` [PATCH 3/4] sched/fair: Return an idle cpu if one is found after a failed search for an idle core Vincent Guittot
2020-12-07  9:15 ` [PATCH 4/4] sched/fair: Avoid revisiting CPUs multiple times during select_idle_sibling Mel Gorman
2020-12-07 15:04 ` [RFC PATCH 0/4] Reduce worst-case scanning of runqueues in select_idle_sibling Vincent Guittot
2020-12-07 15:42   ` Mel Gorman
2020-12-08  2:06     ` Li, Aubrey
