From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nathan Lynch <nathanl@linux.ibm.com>,
Gautham R Shenoy <ego@linux.vnet.ibm.com>,
Oliver O'Halloran <oliveroh@au1.ibm.com>,
Michael Neuling <mikey@linux.ibm.com>,
Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
Michael Ellerman <michaele@au1.ibm.com>,
Anton Blanchard <anton@au1.ibm.com>,
linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
Nick Piggin <npiggin@au1.ibm.com>
Subject: [PATCH 06/11] powerpc/smp: Generalize 2nd sched domain
Date: Tue, 14 Jul 2020 10:06:19 +0530
Message-ID: <20200714043624.5648-7-srikar@linux.vnet.ibm.com>
In-Reply-To: <20200714043624.5648-1-srikar@linux.vnet.ibm.com>
Currently "CACHE" domain happens to be the 2nd sched domain as per
powerpc_topology. This domain will collapse if cpumask of l2-cache is
same as SMT domain. However we could generalize this domain such that it
could mean either be a "CACHE" domain or a "BIGCORE" domain.
While setting up the "CACHE" domain, check if shared_cache is already
set.
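
To make the mechanism concrete, here is a minimal user-space C sketch of
the pattern this patch adopts: the topology table is built with a generic
second entry, which boot code later specializes once cache detection has
run. The struct, helper and flag value below are simplified stand-ins for
illustration, not the kernel's actual definitions.

	#include <stdio.h>
	#include <stdbool.h>

	/* Simplified stand-in for struct sched_domain_topology_level. */
	struct topo_level {
		const char *name;      /* "SMT", "BIGCORE"/"CACHE", "DIE" */
		int (*sd_flags)(void); /* flags callback; may be NULL */
	};

	/* Stand-in for powerpc_shared_cache_flags(); the value is made up. */
	static int shared_cache_flags(void)
	{
		return 0x1;
	}

	/* Mirrors the index enum the patch adds. */
	enum { smt_idx, bigcore_idx, die_idx };

	static struct topo_level topology[] = {
		[smt_idx]     = { "SMT",     NULL },
		[bigcore_idx] = { "BIGCORE", NULL }, /* generic 2nd domain */
		[die_idx]     = { "DIE",     NULL },
	};

	int main(void)
	{
		bool shared_caches = true; /* pretend detection found shared L2 */

		/* Same move as the smp_cpus_done() hunk below:
		 * specialize the generic 2nd entry in place. */
		if (shared_caches) {
			topology[bigcore_idx].name = "CACHE";
			topology[bigcore_idx].sd_flags = shared_cache_flags;
		}

		for (int i = 0; i < 3; i++)
			printf("level %d: %s\n", i, topology[i].name);
		return 0;
	}

The kernel version performs the rewrite in smp_cpus_done(), once every
CPU has had a chance to update shared_caches in start_secondary().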
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Cc: Michael Ellerman <michaele@au1.ibm.com>
Cc: Nick Piggin <npiggin@au1.ibm.com>
Cc: Oliver O'Halloran <oliveroh@au1.ibm.com>
Cc: Nathan Lynch <nathanl@linux.ibm.com>
Cc: Michael Neuling <mikey@linux.ibm.com>
Cc: Anton Blanchard <anton@au1.ibm.com>
Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
Cc: Vaidyanathan Srinivasan <svaidy@linux.ibm.com>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
arch/powerpc/kernel/smp.c | 48 +++++++++++++++++++++++++++------------
1 file changed, 34 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 875f57e41355..f8faf75135af 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -85,6 +85,14 @@ EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map);
EXPORT_PER_CPU_SYMBOL(cpu_core_map);
EXPORT_SYMBOL_GPL(has_big_cores);
+enum {
+#ifdef CONFIG_SCHED_SMT
+ smt_idx,
+#endif
+ bigcore_idx,
+ die_idx,
+};
+
#define MAX_THREAD_LIST_SIZE 8
#define THREAD_GROUP_SHARE_L1 1
struct thread_groups {
@@ -851,13 +859,7 @@ static int powerpc_shared_cache_flags(void)
*/
static const struct cpumask *shared_cache_mask(int cpu)
{
- if (shared_caches)
- return cpu_l2_cache_mask(cpu);
-
- if (has_big_cores)
- return cpu_smallcore_mask(cpu);
-
- return cpu_smt_mask(cpu);
+ return per_cpu(cpu_l2_cache_map, cpu);
}
#ifdef CONFIG_SCHED_SMT
@@ -867,11 +869,16 @@ static const struct cpumask *smallcore_smt_mask(int cpu)
}
#endif
+static const struct cpumask *cpu_bigcore_mask(int cpu)
+{
+ return cpu_core_mask(cpu);
+}
+
static struct sched_domain_topology_level powerpc_topology[] = {
#ifdef CONFIG_SCHED_SMT
{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
#endif
- { shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
+ { cpu_bigcore_mask, SD_INIT_NAME(BIGCORE) },
{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
{ NULL, },
};
@@ -895,7 +902,7 @@ static int init_big_cores(void)
#ifdef CONFIG_SCHED_SMT
pr_info("Big cores detected. Using small core scheduling\n");
- powerpc_topology[0].mask = smallcore_smt_mask;
+ powerpc_topology[smt_idx].mask = smallcore_smt_mask;
#endif
return 0;
@@ -1319,7 +1326,6 @@ static void add_cpu_to_masks(int cpu)
void start_secondary(void *unused)
{
unsigned int cpu = smp_processor_id();
- struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask;
mmgrab(&init_mm);
current->active_mm = &init_mm;
@@ -1345,14 +1351,20 @@ void start_secondary(void *unused)
/* Update topology CPU masks */
add_cpu_to_masks(cpu);
- if (has_big_cores)
- sibling_mask = cpu_smallcore_mask;
/*
* Check for any shared caches. Note that this must be done on a
* per-core basis because one core in the pair might be disabled.
*/
- if (!cpumask_equal(cpu_l2_cache_mask(cpu), sibling_mask(cpu)))
- shared_caches = true;
+ if (!shared_caches) {
+ struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask;
+ struct cpumask *mask = cpu_l2_cache_mask(cpu);
+
+ if (has_big_cores)
+ sibling_mask = cpu_smallcore_mask;
+
+ if (cpumask_weight(mask) > cpumask_weight(sibling_mask(cpu)))
+ shared_caches = true;
+ }
set_numa_node(numa_cpu_lookup_table[cpu]);
set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]));
@@ -1390,6 +1402,14 @@ void __init smp_cpus_done(unsigned int max_cpus)
smp_ops->bringup_done();
dump_numa_cpu_topology();
+ if (shared_caches) {
+ pr_info("Using shared cache scheduler topology\n");
+ powerpc_topology[bigcore_idx].mask = shared_cache_mask;
+#ifdef CONFIG_SCHED_DEBUG
+ powerpc_topology[bigcore_idx].name = "CACHE";
+#endif
+ powerpc_topology[bigcore_idx].sd_flags = powerpc_shared_cache_flags;
+ }
set_sched_topology(powerpc_topology);
}
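
With CONFIG_SCHED_DEBUG enabled, the effect of this rewrite should be
observable at runtime: on a shared-cache system the second level reports
itself as CACHE, otherwise as BIGCORE, e.g. via
/proc/sys/kernel/sched_domain/cpu*/domain*/name on kernels of this
vintage (the exact domain numbering can vary, since degenerate levels
are collapsed by the scheduler).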
--
2.17.1