From: Mel Gorman <mgorman@techsingularity.net>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Ingo Molnar <mingo@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Vincent Guittot <vincent.guittot@linaro.org>,
Valentin Schneider <valentin.schneider@arm.com>,
Aubrey Li <aubrey.li@linux.intel.com>,
Mel Gorman <mgorman@techsingularity.net>
Subject: [PATCH 3/9] sched/fair: Track efficiency of select_idle_core
Date: Mon, 26 Jul 2021 11:22:41 +0100 [thread overview]
Message-ID: <20210726102247.21437-4-mgorman@techsingularity.net> (raw)
In-Reply-To: <20210726102247.21437-1-mgorman@techsingularity.net>
Add efficiency tracking for select_idle_core.
MMTests uses this to generate additional metrics.
SIS Core Search: The number of times a domain was searched for an idle core
SIS Core Hit: The number of searches that found an idle core
SIS Core Miss: The number of searches that failed to find an idle core
SIS Core Search Eff: The percentage of core searches that were successful
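The metrics above can be derived from /proc/schedstat. The sketch below is
an illustration only, assuming the version-18 format introduced by this
patch, where sis_core_search and sis_core_failed are the last two fields of
each per-cpu line; the field positions are an assumption based on the
seq_printf change in this patch, not a documented ABI.

```python
# Derive the SIS Core metrics from /proc/schedstat text.
# Assumes schedstat version 18 as changed by this patch: the last two
# fields of every "cpuN" line are sis_core_search and sis_core_failed.

def sis_core_metrics(schedstat_text):
    search = failed = 0
    for line in schedstat_text.splitlines():
        fields = line.split()
        # Only per-cpu lines carry the SIS counters; skip the version
        # header and the per-domain lines.
        if fields and fields[0].startswith("cpu"):
            search += int(fields[-2])
            failed += int(fields[-1])
    hit = search - failed
    eff = 100.0 * hit / search if search else 0.0
    return search, hit, failed, eff

# Hypothetical single-cpu sample line in the version-18 layout.
sample = "cpu0 10 0 20 5 30 15 1000 2000 50 40 8 60 2 12 4 100 25"
print(sis_core_metrics(sample))
```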
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
kernel/sched/debug.c | 2 ++
kernel/sched/fair.c | 2 ++
kernel/sched/sched.h | 2 ++
kernel/sched/stats.c | 7 ++++---
4 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 1ec87b7bb6a9..26bdc455e4f4 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -744,6 +744,8 @@ do { \
P(sis_failed);
P(sis_recent_hit);
P(sis_recent_miss);
+ P(sis_core_search);
+ P(sis_core_failed);
}
#undef P
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4d48dc08a49b..4e2979b73cec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6138,6 +6138,7 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
if (!static_branch_likely(&sched_smt_present))
return __select_idle_cpu(core, p);
+ schedstat_inc(this_rq()->sis_core_search);
for_each_cpu(cpu, cpu_smt_mask(core)) {
schedstat_inc(this_rq()->sis_scanned);
if (!available_idle_cpu(cpu)) {
@@ -6158,6 +6159,7 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
if (idle)
return core;
+ schedstat_inc(this_rq()->sis_core_failed);
cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
return -1;
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1c04d7a97dbe..e31179e6c6ff 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1080,6 +1080,8 @@ struct rq {
unsigned int sis_failed;
unsigned int sis_recent_hit;
unsigned int sis_recent_miss;
+ unsigned int sis_core_search;
+ unsigned int sis_core_failed;
#endif
#ifdef CONFIG_CPU_IDLE
diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
index 1aa648edb88b..5672f3dc7002 100644
--- a/kernel/sched/stats.c
+++ b/kernel/sched/stats.c
@@ -10,7 +10,7 @@
* Bump this up when changing the output format or the meaning of an existing
* format, so that tools can adapt (or abort)
*/
-#define SCHEDSTAT_VERSION 17
+#define SCHEDSTAT_VERSION 18
static int show_schedstat(struct seq_file *seq, void *v)
{
@@ -30,7 +30,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
/* runqueue-specific stats */
seq_printf(seq,
- "cpu%d %u 0 %u %u %u %u %llu %llu %lu %u %u %u %u %u %u",
+ "cpu%d %u 0 %u %u %u %u %llu %llu %lu %u %u %u %u %u %u %u %u",
cpu, rq->yld_count,
rq->sched_count, rq->sched_goidle,
rq->ttwu_count, rq->ttwu_local,
@@ -38,7 +38,8 @@ static int show_schedstat(struct seq_file *seq, void *v)
rq->rq_sched_info.run_delay, rq->rq_sched_info.pcount,
rq->sis_search, rq->sis_domain_search,
rq->sis_scanned, rq->sis_failed,
- rq->sis_recent_hit, rq->sis_recent_miss);
+ rq->sis_recent_hit, rq->sis_recent_miss,
+ rq->sis_core_search, rq->sis_core_failed);
seq_printf(seq, "\n");
--
2.26.2
Thread overview: 19+ messages
2021-07-26 10:22 [RFC PATCH 0/9] Modify and/or delete SIS_PROP Mel Gorman
2021-07-26 10:22 ` [PATCH 1/9] sched/fair: Track efficiency of select_idle_sibling Mel Gorman
2021-07-26 10:22 ` [PATCH 2/9] sched/fair: Track efficiency of task recent_used_cpu Mel Gorman
2021-07-26 10:22 ` Mel Gorman [this message]
2021-07-26 10:22 ` [PATCH 4/9] sched/fair: Use prev instead of new target as recent_used_cpu Mel Gorman
2021-07-26 10:22 ` [PATCH 5/9] sched/fair: Avoid a second scan of target in select_idle_cpu Mel Gorman
2021-07-26 10:22 ` [PATCH 6/9] sched/fair: Make select_idle_cpu() proportional to cores Mel Gorman
2021-07-26 10:22 ` [PATCH 7/9] sched/fair: Enforce proportional scan limits when scanning for an idle core Mel Gorman
2021-08-02 10:52 ` Song Bao Hua (Barry Song)
2021-08-04 10:22 ` Mel Gorman
2021-07-26 10:22 ` [PATCH 8/9] sched/fair: select idle cpu from idle cpumask for task wakeup Mel Gorman
2021-08-02 10:41 ` Song Bao Hua (Barry Song)
2021-08-04 10:26 ` Mel Gorman
2021-08-05 0:23 ` Aubrey Li
2021-09-17 3:44 ` Barry Song
2021-09-17 4:15 ` Barry Song
2021-09-17 9:11 ` Aubrey Li
2021-09-17 13:35 ` Mel Gorman
2021-07-26 10:22 ` [PATCH 9/9] sched/core: Delete SIS_PROP and rely on the idle cpu mask Mel Gorman