* [PATCH 0/2 RESEND] sched/topology: Optimize topology_span_sane()
@ 2024-04-09 15:52 Kyle Meyer
  2024-04-09 15:52 ` [PATCH 1/2 RESEND] cpumask: Add for_each_cpu_from() Kyle Meyer
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Kyle Meyer @ 2024-04-09 15:52 UTC (permalink / raw)
  To: linux-kernel, yury.norov, andriy.shevchenko, linux, mingo,
	peterz, juri.lelli, vincent.guittot, dietmar.eggemann, rostedt,
	bsegall, mgorman, bristot, vschneid
  Cc: russ.anderson, dimitri.sivanich, steve.wahl, Kyle Meyer

A soft lockup is being detected in build_sched_domains() on 32-socket
Sapphire Rapids systems with 3840 processors.

topology_span_sane(), called by build_sched_domains(), checks that each
processor's non-NUMA scheduling domains are completely equal or
completely disjoint. If a non-NUMA scheduling domain partially overlaps
another, scheduling groups can break.

This series adds for_each_cpu_from() as a generic cpumask macro to
optimize topology_span_sane() by removing duplicate comparisons. The
total number of comparisons is reduced from N * (N - 1) to
N * (N - 1) / 2 (per non-NUMA scheduling domain level), decreasing the
boot time by approximately 20 seconds and preventing the soft lockup on
the mentioned systems.
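The claimed reduction can be sanity-checked with a minimal userspace
model (illustrative only, not the kernel loops themselves; the counters
stand in for one cpumask comparison each):

```c
#include <assert.h>

/*
 * Model of the old scheme: topology_span_sane() runs once per CPU and
 * its inner loop visits every other CPU, so N * (N - 1) comparisons.
 */
static unsigned long count_old(int n)
{
	unsigned long c = 0;

	for (int cpu = 0; cpu < n; cpu++)
		for (int i = 0; i < n; i++)
			if (i != cpu)
				c++;	/* one mask comparison */
	return c;
}

/*
 * Model of the new scheme: the inner loop starts at cpu + 1, so each
 * unordered pair is compared once, giving N * (N - 1) / 2 comparisons.
 */
static unsigned long count_new(int n)
{
	unsigned long c = 0;

	for (int cpu = 0; cpu < n; cpu++)
		for (int i = cpu + 1; i < n; i++)
			c++;		/* one mask comparison */
	return c;
}
```

For the 3840-processor systems above, that is 14,741,760 comparisons
reduced to 7,370,880 per non-NUMA scheduling domain level.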

RESEND because Valentin Schneider reported that PATCH 2/2 wasn't
delivered to all recipients.

Kyle Meyer (2):
  cpumask: Add for_each_cpu_from()
  sched/topology: Optimize topology_span_sane()

 include/linux/cpumask.h | 10 ++++++++++
 kernel/sched/topology.c |  6 ++----
 2 files changed, 12 insertions(+), 4 deletions(-)

-- 
2.44.0



* [PATCH 1/2 RESEND] cpumask: Add for_each_cpu_from()
  2024-04-09 15:52 [PATCH 0/2 RESEND] sched/topology: Optimize topology_span_sane() Kyle Meyer
@ 2024-04-09 15:52 ` Kyle Meyer
  2024-04-10  7:26   ` Vincent Guittot
  2024-04-09 15:52 ` [PATCH 2/2 RESEND] sched/topology: Optimize topology_span_sane() Kyle Meyer
  2024-04-10 13:27 ` [PATCH 0/2 " Yury Norov
  2 siblings, 1 reply; 9+ messages in thread
From: Kyle Meyer @ 2024-04-09 15:52 UTC (permalink / raw)
  To: linux-kernel, yury.norov, andriy.shevchenko, linux, mingo,
	peterz, juri.lelli, vincent.guittot, dietmar.eggemann, rostedt,
	bsegall, mgorman, bristot, vschneid
  Cc: russ.anderson, dimitri.sivanich, steve.wahl, Kyle Meyer

Add for_each_cpu_from() as a generic cpumask macro.

for_each_cpu_from() is the same as for_each_cpu(), except it starts at
@cpu instead of zero.
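A userspace sketch of the difference in semantics, using a plain
integer as a toy 8-bit mask (names and the mask width are illustrative;
the real macros are built on for_each_set_bit() and
for_each_set_bit_from() over struct cpumask):

```c
#include <assert.h>

#define TOY_NR_CPUS 8

/* Model of for_each_cpu(): visit every set bit, starting at zero. */
static int visit_all(unsigned int mask, int *out)
{
	int n = 0;

	for (int cpu = 0; cpu < TOY_NR_CPUS; cpu++)
		if (mask & (1u << cpu))
			out[n++] = cpu;
	return n;
}

/* Model of for_each_cpu_from(): visit set bits at @cpu and above. */
static int visit_from(unsigned int mask, int cpu, int *out)
{
	int n = 0;

	for (; cpu < TOY_NR_CPUS; cpu++)
		if (mask & (1u << cpu))
			out[n++] = cpu;
	return n;
}
```

For mask 0xB2 (bits 1, 4, 5, 7 set), visit_all() yields 1, 4, 5, 7
while visit_from() with cpu = 4 yields only 4, 5, 7.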

Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
Acked-by: Yury Norov <yury.norov@gmail.com>
---
 include/linux/cpumask.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 1c29947db848..655211db38ff 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -368,6 +368,16 @@ unsigned int __pure cpumask_next_wrap(int n, const struct cpumask *mask, int sta
 #define for_each_cpu_or(cpu, mask1, mask2)				\
 	for_each_or_bit(cpu, cpumask_bits(mask1), cpumask_bits(mask2), small_cpumask_bits)
 
+/**
+ * for_each_cpu_from - iterate over every cpu present in @mask, starting at @cpu
+ * @cpu: the (optionally unsigned) integer iterator
+ * @mask: the cpumask pointer
+ *
+ * After the loop, cpu is >= nr_cpu_ids.
+ */
+#define for_each_cpu_from(cpu, mask)				\
+	for_each_set_bit_from(cpu, cpumask_bits(mask), small_cpumask_bits)
+
 /**
  * cpumask_any_but - return a "random" cpu in a cpumask, but not this one.
  * @mask: the cpumask to search
-- 
2.44.0



* [PATCH 2/2 RESEND] sched/topology: Optimize topology_span_sane()
  2024-04-09 15:52 [PATCH 0/2 RESEND] sched/topology: Optimize topology_span_sane() Kyle Meyer
  2024-04-09 15:52 ` [PATCH 1/2 RESEND] cpumask: Add for_each_cpu_from() Kyle Meyer
@ 2024-04-09 15:52 ` Kyle Meyer
  2024-04-09 16:25   ` Andy Shevchenko
  2024-04-10  7:34   ` Vincent Guittot
  2024-04-10 13:27 ` [PATCH 0/2 " Yury Norov
  2 siblings, 2 replies; 9+ messages in thread
From: Kyle Meyer @ 2024-04-09 15:52 UTC (permalink / raw)
  To: linux-kernel, yury.norov, andriy.shevchenko, linux, mingo,
	peterz, juri.lelli, vincent.guittot, dietmar.eggemann, rostedt,
	bsegall, mgorman, bristot, vschneid
  Cc: russ.anderson, dimitri.sivanich, steve.wahl, Kyle Meyer

Optimize topology_span_sane() by removing duplicate comparisons.

The total number of comparisons is reduced from N * (N - 1) to
N * (N - 1) / 2 (per non-NUMA scheduling domain level).

Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
---
 kernel/sched/topology.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 99ea5986038c..b6bcafc09969 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2347,7 +2347,7 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
 static bool topology_span_sane(struct sched_domain_topology_level *tl,
 			      const struct cpumask *cpu_map, int cpu)
 {
-	int i;
+	int i = cpu + 1;
 
 	/* NUMA levels are allowed to overlap */
 	if (tl->flags & SDTL_OVERLAP)
@@ -2359,9 +2359,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
 	 * breaking the sched_group lists - i.e. a later get_group() pass
 	 * breaks the linking done for an earlier span.
 	 */
-	for_each_cpu(i, cpu_map) {
-		if (i == cpu)
-			continue;
+	for_each_cpu_from(i, cpu_map) {
 		/*
 		 * We should 'and' all those masks with 'cpu_map' to exactly
 		 * match the topology we're about to build, but that can only
-- 
2.44.0



* Re: [PATCH 2/2 RESEND] sched/topology: Optimize topology_span_sane()
  2024-04-09 15:52 ` [PATCH 2/2 RESEND] sched/topology: Optimize topology_span_sane() Kyle Meyer
@ 2024-04-09 16:25   ` Andy Shevchenko
  2024-04-09 19:29     ` Kyle Meyer
  2024-04-10  7:34   ` Vincent Guittot
  1 sibling, 1 reply; 9+ messages in thread
From: Andy Shevchenko @ 2024-04-09 16:25 UTC (permalink / raw)
  To: Kyle Meyer
  Cc: linux-kernel, yury.norov, linux, mingo, peterz, juri.lelli,
	vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, russ.anderson, dimitri.sivanich, steve.wahl

On Tue, Apr 09, 2024 at 10:52:50AM -0500, Kyle Meyer wrote:
> Optimize topology_span_sane() by removing duplicate comparisons.
> 
> The total number of comparisons is reduced from N * (N - 1) to
> N * (N - 1) / 2 (per non-NUMA scheduling domain level).

...

> -	for_each_cpu(i, cpu_map) {
> -		if (i == cpu)
> -			continue;
> +	for_each_cpu_from(i, cpu_map) {

Hmm... I'm not familiar with the for_each_cpu*(), but from the above
it seems only a single comparison? Or i.o.w. can i ever repeat the value?
And what about i < cpu cases?

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH 2/2 RESEND] sched/topology: Optimize topology_span_sane()
  2024-04-09 16:25   ` Andy Shevchenko
@ 2024-04-09 19:29     ` Kyle Meyer
  2024-04-10 13:47       ` Andy Shevchenko
  0 siblings, 1 reply; 9+ messages in thread
From: Kyle Meyer @ 2024-04-09 19:29 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: linux-kernel, yury.norov, linux, mingo, peterz, juri.lelli,
	vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, russ.anderson, dimitri.sivanich, steve.wahl

On Tue, Apr 09, 2024 at 07:25:06PM +0300, Andy Shevchenko wrote:
> On Tue, Apr 09, 2024 at 10:52:50AM -0500, Kyle Meyer wrote:
> > Optimize topology_span_sane() by removing duplicate comparisons.
> > 
> > The total number of comparisons is reduced from N * (N - 1) to
> > N * (N - 1) / 2 (per non-NUMA scheduling domain level).
> 
> ...
> 
> > -	for_each_cpu(i, cpu_map) {
> > -		if (i == cpu)
> > -			continue;
> > +	for_each_cpu_from(i, cpu_map) {
> 
> Hmm... I'm not familiar with the for_each_cpu*(), but from the above
> it seems only a single comparison? Or i.o.w. can i ever repeat the value?

for_each_cpu() is a macro around for_each_set_bit() which iterates over each set
bit in a bitmap starting at zero.

for_each_cpu_from() is a macro around for_each_set_bit_from() which iterates
over each set bit in a bitmap starting at the specified bit.

The above (topology_span_sane()) currently does a "single comparison" for each
CPU in cpu_map, but it's called for each CPU in cpu_map and for each scheduling
domain level (please see build_sched_domains() in kernel/sched/topology.c).

> And what about i < cpu cases?

Those values have already been passed to topology_span_sane(). This patch uses
for_each_cpu_from() starting at cpu + 1 to prevent those duplicate comparisons.
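The "already checked" argument can be sketched in userspace: with the
inner loop starting at cpu + 1, every unordered pair of distinct CPUs
is still compared exactly once across all calls (illustrative model
only, not the kernel code; NR stands in for the number of CPUs in
cpu_map):

```c
#include <assert.h>

#define NR 6

/* Tally how often each unordered pair (cpu, i) is compared. */
static void count_pairs(int seen[NR][NR])
{
	for (int cpu = 0; cpu < NR; cpu++)		/* one call per CPU */
		for (int i = cpu + 1; i < NR; i++)	/* for_each_cpu_from() */
			seen[cpu][i]++;
}

/* Return 1 iff every pair a < b was compared exactly once. */
static int pair_checked_once(void)
{
	int seen[NR][NR] = { 0 };

	count_pairs(seen);
	for (int a = 0; a < NR; a++)
		for (int b = a + 1; b < NR; b++)
			if (seen[a][b] != 1)
				return 0;
	return 1;
}
```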

Thanks,
Kyle Meyer


* Re: [PATCH 1/2 RESEND] cpumask: Add for_each_cpu_from()
  2024-04-09 15:52 ` [PATCH 1/2 RESEND] cpumask: Add for_each_cpu_from() Kyle Meyer
@ 2024-04-10  7:26   ` Vincent Guittot
  0 siblings, 0 replies; 9+ messages in thread
From: Vincent Guittot @ 2024-04-10  7:26 UTC (permalink / raw)
  To: Kyle Meyer
  Cc: linux-kernel, yury.norov, andriy.shevchenko, linux, mingo,
	peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, russ.anderson, dimitri.sivanich, steve.wahl

On Tue, 9 Apr 2024 at 17:54, Kyle Meyer <kyle.meyer@hpe.com> wrote:
>
> Add for_each_cpu_from() as a generic cpumask macro.
>
> for_each_cpu_from() is the same as for_each_cpu(), except it starts at
> @cpu instead of zero.
>
> Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
> Acked-by: Yury Norov <yury.norov@gmail.com>
> ---
>  include/linux/cpumask.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
> index 1c29947db848..655211db38ff 100644
> --- a/include/linux/cpumask.h
> +++ b/include/linux/cpumask.h
> @@ -368,6 +368,16 @@ unsigned int __pure cpumask_next_wrap(int n, const struct cpumask *mask, int sta
>  #define for_each_cpu_or(cpu, mask1, mask2)                             \
>         for_each_or_bit(cpu, cpumask_bits(mask1), cpumask_bits(mask2), small_cpumask_bits)
>
> +/**
> + * for_each_cpu_from - iterate over every cpu present in @mask, starting at @cpu

So I was confused about why you were not using for_each_cpu_wrap()
while reading the description, which has the same comment:
"
 * for_each_cpu_wrap - iterate over every cpu in a mask, starting at a
specified location
"
Could you clarify that it's not "every cpu present in @mask" but only
those after @cpu?

> + * @cpu: the (optionally unsigned) integer iterator
> + * @mask: the cpumask pointer
> + *
> + * After the loop, cpu is >= nr_cpu_ids.
> + */
> +#define for_each_cpu_from(cpu, mask)                           \
> +       for_each_set_bit_from(cpu, cpumask_bits(mask), small_cpumask_bits)
> +
>  /**
>  * cpumask_any_but - return a "random" cpu in a cpumask, but not this one.
>   * @mask: the cpumask to search
> --
> 2.44.0
>


* Re: [PATCH 2/2 RESEND] sched/topology: Optimize topology_span_sane()
  2024-04-09 15:52 ` [PATCH 2/2 RESEND] sched/topology: Optimize topology_span_sane() Kyle Meyer
  2024-04-09 16:25   ` Andy Shevchenko
@ 2024-04-10  7:34   ` Vincent Guittot
  1 sibling, 0 replies; 9+ messages in thread
From: Vincent Guittot @ 2024-04-10  7:34 UTC (permalink / raw)
  To: Kyle Meyer
  Cc: linux-kernel, yury.norov, andriy.shevchenko, linux, mingo,
	peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, russ.anderson, dimitri.sivanich, steve.wahl

On Tue, 9 Apr 2024 at 17:54, Kyle Meyer <kyle.meyer@hpe.com> wrote:
>
> Optimize topology_span_sane() by removing duplicate comparisons.
>
> The total number of comparisons is reduced from N * (N - 1) to
> N * (N - 1) / 2 (per non-NUMA scheduling domain level).
>
> Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
> Reviewed-by: Yury Norov <yury.norov@gmail.com>

Acked-by: Vincent Guittot <vincent.guittot@linaro.org>

> ---
>  kernel/sched/topology.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 99ea5986038c..b6bcafc09969 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2347,7 +2347,7 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
>  static bool topology_span_sane(struct sched_domain_topology_level *tl,
>                               const struct cpumask *cpu_map, int cpu)
>  {
> -       int i;
> +       int i = cpu + 1;
>
>         /* NUMA levels are allowed to overlap */
>         if (tl->flags & SDTL_OVERLAP)
> @@ -2359,9 +2359,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
>          * breaking the sched_group lists - i.e. a later get_group() pass
>          * breaks the linking done for an earlier span.
>          */
> -       for_each_cpu(i, cpu_map) {
> -               if (i == cpu)
> -                       continue;
> +       for_each_cpu_from(i, cpu_map) {
>                 /*
>                  * We should 'and' all those masks with 'cpu_map' to exactly
>                  * match the topology we're about to build, but that can only
> --
> 2.44.0
>


* Re: [PATCH 0/2 RESEND] sched/topology: Optimize topology_span_sane()
  2024-04-09 15:52 [PATCH 0/2 RESEND] sched/topology: Optimize topology_span_sane() Kyle Meyer
  2024-04-09 15:52 ` [PATCH 1/2 RESEND] cpumask: Add for_each_cpu_from() Kyle Meyer
  2024-04-09 15:52 ` [PATCH 2/2 RESEND] sched/topology: Optimize topology_span_sane() Kyle Meyer
@ 2024-04-10 13:27 ` Yury Norov
  2 siblings, 0 replies; 9+ messages in thread
From: Yury Norov @ 2024-04-10 13:27 UTC (permalink / raw)
  To: Kyle Meyer
  Cc: linux-kernel, andriy.shevchenko, linux, mingo, peterz,
	juri.lelli, vincent.guittot, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, russ.anderson, dimitri.sivanich,
	steve.wahl

On Tue, Apr 09, 2024 at 10:52:48AM -0500, Kyle Meyer wrote:
> A soft lockup is being detected in build_sched_domains() on 32-socket
> Sapphire Rapids systems with 3840 processors.
> 
> topology_span_sane(), called by build_sched_domains(), checks that each
> processor's non-NUMA scheduling domains are completely equal or
> completely disjoint. If a non-NUMA scheduling domain partially overlaps
> another, scheduling groups can break.
> 
> This series adds for_each_cpu_from() as a generic cpumask macro to
> optimize topology_span_sane() by removing duplicate comparisons. The
> total number of comparisons is reduced from N * (N - 1) to
> N * (N - 1) / 2 (per non-NUMA scheduling domain level), decreasing the
> boot time by approximately 20 seconds and preventing the soft lockup on
> the mentioned systems.
> 
> RESEND because Valentin Schneider reported that PATCH 2/2 wasn't
> delivered to all recipients.

Hmm... And this one doesn't have #1, at least for me. Can you please
resend again, so we all will have the series as a whole?

Also, can you rephrase this from patch #1 because it confuses people:

> + * for_each_cpu_from - iterate over every cpu present in @mask, starting at @cpu

Maybe: iterate over CPUs present in a @mask greater than a @cpu?
Or similar.

Thanks,
Yury

> Kyle Meyer (2):
>   cpumask: Add for_each_cpu_from()
>   sched/topology: Optimize topology_span_sane()
> 
>  include/linux/cpumask.h | 10 ++++++++++
>  kernel/sched/topology.c |  6 ++----
>  2 files changed, 12 insertions(+), 4 deletions(-)
> 
> -- 
> 2.44.0


* Re: [PATCH 2/2 RESEND] sched/topology: Optimize topology_span_sane()
  2024-04-09 19:29     ` Kyle Meyer
@ 2024-04-10 13:47       ` Andy Shevchenko
  0 siblings, 0 replies; 9+ messages in thread
From: Andy Shevchenko @ 2024-04-10 13:47 UTC (permalink / raw)
  To: Kyle Meyer
  Cc: linux-kernel, yury.norov, linux, mingo, peterz, juri.lelli,
	vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, russ.anderson, dimitri.sivanich, steve.wahl

On Tue, Apr 09, 2024 at 02:29:09PM -0500, Kyle Meyer wrote:
> On Tue, Apr 09, 2024 at 07:25:06PM +0300, Andy Shevchenko wrote:
> > On Tue, Apr 09, 2024 at 10:52:50AM -0500, Kyle Meyer wrote:
> > > Optimize topology_span_sane() by removing duplicate comparisons.
> > > 
> > > The total number of comparisons is reduced from N * (N - 1) to
> > > N * (N - 1) / 2 (per non-NUMA scheduling domain level).

...

> > > -	for_each_cpu(i, cpu_map) {
> > > -		if (i == cpu)
> > > -			continue;
> > > +	for_each_cpu_from(i, cpu_map) {
> > 
> > Hmm... I'm not familiar with the for_each_cpu*(), but from the above
> > it seems only a single comparison? Or i.o.w. can i ever repeat the value?
> 
> for_each_cpu() is a macro around for_each_set_bit() which iterates over each set
> bit in a bitmap starting at zero.
> 
> for_each_cpu_from() is a macro around for_each_set_bit_from() which iterates
> over each set bit in a bitmap starting at the specified bit.
> 
> The above (topology_span_sane()) currently does a "single comparison" for each
> CPU in cpu_map, but it's called for each CPU in cpu_map and for each scheduling
> domain level (please see build_sched_domains() in kernel/sched/topology.c).
> 
> > And what about i < cpu cases?
> 
> Those values have already been passed to topology_span_sane(). This patch uses
> for_each_cpu_from() starting at cpu + 1 to prevent those duplicate comparisons.

So, it appears to me that the commit message has room to improve /
elaborate based on what you explained to me above.

Thanks!

-- 
With Best Regards,
Andy Shevchenko



