From: Yicong Yang <yangyicong@huawei.com>
To: Darren Hart <darren@os.amperecomputing.com>
Cc: <yangyicong@hisilicon.com>, Sudeep Holla <sudeep.holla@arm.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"D . Scott Phillips" <scott@os.amperecomputing.com>,
	Ilkka Koskinen <ilkka@os.amperecomputing.com>,
	<stable@vger.kernel.org>, LKML <linux-kernel@vger.kernel.org>,
	Linux Arm <linux-arm-kernel@lists.infradead.org>,
	Ionela Voinescu <ionela.voinescu@arm.com>,
	Barry Song <21cnbao@gmail.com>,
	Jonathan Cameron <jonathan.cameron@huawei.com>
Subject: Re: [PATCH v5] topology: make core_mask include at least cluster_siblings
Date: Fri, 16 Sep 2022 15:59:34 +0800	[thread overview]
Message-ID: <bcd61ebd-d751-57a3-690b-b76c7bd230c5@huawei.com> (raw)
In-Reply-To: <YyNnMmtoOrdexLoy@fedora>

On 2022/9/16 1:56, Darren Hart wrote:
> On Thu, Sep 15, 2022 at 08:01:18PM +0800, Yicong Yang wrote:
>> Hi Darren,
>>
> 
> Hi Yicong,
> 
> ...
> 
>>> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
>>> index 1d6636ebaac5..5497c5ab7318 100644
>>> --- a/drivers/base/arch_topology.c
>>> +++ b/drivers/base/arch_topology.c
>>> @@ -667,6 +667,15 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
>>>  			core_mask = &cpu_topology[cpu].llc_sibling;
>>>  	}
>>>  
>>> +	/*
>>> +	 * For systems with no shared cpu-side LLC but with clusters defined,
>>> +	 * extend core_mask to cluster_siblings. The sched domain builder will
>>> +	 * then remove MC as redundant with CLS if SCHED_CLUSTER is enabled.
>>> +	 */
>>> +	if (IS_ENABLED(CONFIG_SCHED_CLUSTER) &&
>>> +	    cpumask_subset(core_mask, &cpu_topology[cpu].cluster_sibling))
>>> +		core_mask = &cpu_topology[cpu].cluster_sibling;
>>> +
>>>  	return core_mask;
>>>  }
>>>  
>>
>> Is this patch still necessary for Ampere after Ionela's patch [1], which
>> limits the cluster's span to within the coregroup's span?
> 
> Yes, see:
> https://lore.kernel.org/lkml/YshYAyEWhE4z%2FKpB@fedora/
> 
> Both patches work together to accomplish the desired sched domains for the
> Ampere Altra family.
> 

Thanks for the link. From my understanding, on the Altra machine we'll get
the following results:

With your patch alone:
The scheduler will see a weight of 2 for both the CLS and MC levels, and the
MC domain will finally be squashed as redundant. The lowest domain will be CLS.

With both your patch and Ionela's:
CLS will have a weight of 1 and MC a weight of 2. CLS won't be built and the
lowest domain will be MC.

With Ionela's patch alone:
Both CLS and MC will have a weight of 1, which is incorrect.

So your patch is still necessary for the Ampere Altra. We then also need to
limit the MC span to the DIE/NODE span, per the scheduler's definition of
topology levels, to fix the issue below. Maybe something like this:

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 46cbe4471e78..8ebaba576836 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -713,6 +713,9 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
            cpumask_subset(core_mask, &cpu_topology[cpu].cluster_sibling))
                core_mask = &cpu_topology[cpu].cluster_sibling;

+       if (cpumask_subset(cpu_cpu_mask(cpu), core_mask))
+               core_mask = cpu_cpu_mask(cpu);
+
        return core_mask;
 }
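For clarity, here is a minimal userspace sketch of the selection order the
combined checks would produce. It uses plain bitmasks instead of struct
cpumask, mask_subset() is a made-up stand-in for cpumask_subset(), and the
assumption that cluster_sibling ends up spanning all CPUs matches the MC span
seen in the log from the emulated machine below:

#include <stdio.h>

/* Plain bitmasks stand in for struct cpumask; one bit per CPU. */
static int mask_subset(unsigned int a, unsigned int b)
{
	return (a & ~b) == 0;	/* made-up stand-in for cpumask_subset() */
}

int main(void)
{
	unsigned int core_mask       = 0x03;	/* CPUs 0-1, the expected MC span */
	unsigned int cluster_sibling = 0xff;	/* assumed to span all CPUs here */
	unsigned int node_mask       = 0x03;	/* cpu_cpu_mask(): CPUs 0-1 of node 0 */

	/* Your patch: grow core_mask to the cluster if the cluster is larger. */
	if (mask_subset(core_mask, cluster_sibling))
		core_mask = cluster_sibling;	/* now 0xff, MC spans 0-7 */

	/* Proposed fix above: never let core_mask exceed the node span. */
	if (mask_subset(node_mask, core_mask))
		core_mask = node_mask;		/* clamped back to 0x03 */

	printf("core_mask = %#x\n", core_mask);
	return 0;
}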

>>
>> I found an issue that the NUMA domains are not built on qemu with:
>>
>> qemu-system-aarch64 \
>>         -kernel ${Image} \
>>         -smp 8 \
>>         -cpu cortex-a72 \
>>         -m 32G \
>>         -object memory-backend-ram,id=node0,size=8G \
>>         -object memory-backend-ram,id=node1,size=8G \
>>         -object memory-backend-ram,id=node2,size=8G \
>>         -object memory-backend-ram,id=node3,size=8G \
>>         -numa node,memdev=node0,cpus=0-1,nodeid=0 \
>>         -numa node,memdev=node1,cpus=2-3,nodeid=1 \
>>         -numa node,memdev=node2,cpus=4-5,nodeid=2 \
>>         -numa node,memdev=node3,cpus=6-7,nodeid=3 \
>>         -numa dist,src=0,dst=1,val=12 \
>>         -numa dist,src=0,dst=2,val=20 \
>>         -numa dist,src=0,dst=3,val=22 \
>>         -numa dist,src=1,dst=2,val=22 \
>>         -numa dist,src=1,dst=3,val=24 \
>>         -numa dist,src=2,dst=3,val=12 \
>>         -machine virt,iommu=smmuv3 \
>>         -net none \
>>         -initrd ${Rootfs} \
>>         -nographic \
>>         -bios QEMU_EFI.fd \
>>         -append "rdinit=/init console=ttyAMA0 earlycon=pl011,0x9000000 sched_verbose loglevel=8"
>>
>> I can see the sched domain build stops at the MC level since it has reached
>> all the CPUs in the system:
>>
>> [    2.141316] CPU0 attaching sched-domain(s):
>> [    2.142558]  domain-0: span=0-7 level=MC
>> [    2.145364]   groups: 0:{ span=0 cap=964 }, 1:{ span=1 cap=914 }, 2:{ span=2 cap=921 }, 3:{ span=3 cap=964 }, 4:{ span=4 cap=925 }, 5:{ span=5 cap=964 }, 6:{ span=6 cap=967 }, 7:{ span=7 cap=967 }
>> [    2.158357] CPU1 attaching sched-domain(s):
>> [    2.158964]  domain-0: span=0-7 level=MC
>> [...]
>>
>> Without this the NUMA domains are built correctly:
>>
> Without which? My patch, Ionela's patch, or both?
> 

Sorry for the ambiguity: I meant reverting only your patch. With your patch
applied, MC for CPU 0 spans 0-7 instead of the expected 0-1, so the sched
domain build stops at the MC level because it has already reached all the CPUs.
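To make the failure mode concrete, here is a rough standalone sketch, not the
actual builder code in kernel/sched/topology.c, assuming the builder stops at
the first level whose span already covers every CPU (which matches the dmesg
above):

#include <stdio.h>

#define ALL_CPUS 0xffu	/* 8 CPUs, one bit each */

int main(void)
{
	/* Per-level spans for CPU 0 with the patch applied. */
	static const struct { const char *name; unsigned int span; } levels[] = {
		{ "MC",    0xffu },	/* wrongly extended to CPUs 0-7 */
		{ "NUMA0", 0x0fu },	/* would be CPUs 0-3, never built */
		{ "NUMA1", 0x3fu },	/* would be CPUs 0-5, never built */
		{ "NUMA2", 0xffu },	/* would be CPUs 0-7, never built */
	};

	for (unsigned int i = 0; i < sizeof(levels) / sizeof(levels[0]); i++) {
		printf("building %s: span=%#x\n", levels[i].name, levels[i].span);
		if (levels[i].span == ALL_CPUS) {
			printf("%s spans all CPUs, stopping\n", levels[i].name);
			break;	/* mirrors the builder's early exit */
		}
	}
	return 0;
}

With your patch reverted, MC spans only CPUs 0-1 and the build proceeds
through the NUMA levels, giving the result below: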

>> [    2.008885] CPU0 attaching sched-domain(s):
>> [    2.009764]  domain-0: span=0-1 level=MC
>> [    2.012654]   groups: 0:{ span=0 cap=962 }, 1:{ span=1 cap=925 }
>> [    2.016532]   domain-1: span=0-3 level=NUMA
>> [    2.017444]    groups: 0:{ span=0-1 cap=1887 }, 2:{ span=2-3 cap=1871 }
>> [    2.019354]    domain-2: span=0-5 level=NUMA
> 
> I'm not following this topology - what in the description above should result in
> a domain with span=0-5?
> 

It emulates a 3-hop NUMA machine, and the NUMA domains are built according to
the NUMA distances:

node   0   1   2   3
  0:  10  12  20  22
  1:  12  10  22  24
  2:  20  22  10  12
  3:  22  24  12  10

So for CPU 0 the NUMA domains will look like:
NUMA domain 0 for the local node (squashed into the MC domain), CPUs 0-1
NUMA domain 1 for nodes within distance 12, CPUs 0-3
NUMA domain 2 for nodes within distance 20, CPUs 0-5
NUMA domain 3 for all nodes, CPUs 0-7
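To make the distance-to-span mapping explicit, here is a small standalone
sketch (plain C, not the kernel's NUMA level setup) that derives these spans
for node 0 from the distance table, using the distinct distances seen from
node 0 as thresholds:

#include <stdio.h>

#define NR_NODES 4

/* NUMA distance table from the qemu command line (10 = local). */
static const int dist[NR_NODES][NR_NODES] = {
	{ 10, 12, 20, 22 },
	{ 12, 10, 22, 24 },
	{ 20, 22, 10, 12 },
	{ 22, 24, 12, 10 },
};

/* Each node holds two CPUs: node n -> CPUs 2n and 2n+1. */
static unsigned int node_to_cpus(int node)
{
	return 0x3u << (2 * node);
}

int main(void)
{
	/* Distance thresholds, one per potential NUMA level. */
	static const int thresholds[] = { 10, 12, 20, 22 };
	unsigned int prev_span = 0;

	for (unsigned int lvl = 0; lvl < 4; lvl++) {
		unsigned int span = 0;

		for (int n = 0; n < NR_NODES; n++)
			if (dist[0][n] <= thresholds[lvl])
				span |= node_to_cpus(n);

		/* Levels whose span matches the previous one are collapsed. */
		if (span != prev_span)
			printf("NUMA level %u (dist <= %d): cpumask %#x\n",
			       lvl, thresholds[lvl], span);
		prev_span = span;
	}
	return 0;
}

Running this prints cpumasks 0x3, 0xf, 0x3f and 0xff, matching the four
domains listed above.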

Thanks.

> 
>> [    2.019983]     groups: 0:{ span=0-3 cap=3758 }, 4:{ span=4-5 cap=1935 }
>> [    2.021527]     domain-3: span=0-7 level=NUMA
>> [    2.022516]      groups: 0:{ span=0-5 mask=0-1 cap=5693 }, 6:{ span=4-7 mask=6-7 cap=3978 }
>> [...]
>>
>> Hope to see your comments, since I have no Ampere machine and don't know
>> how to emulate its topology on qemu.
>>
>> [1] bfcc4397435d ("arch_topology: Limit span of cpu_clustergroup_mask()")
>>
>> Thanks,
>> Yicong
> 
> Thanks,
> 
