From: Sudeep Holla <sudeep.holla@arm.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-kernel@vger.kernel.org, conor.dooley@microchip.com,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ionela Voinescu <ionela.voinescu@arm.com>,
	Pierre Gondois <pierre.gondois@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-riscv@lists.infradead.org
Subject: Re: [PATCH -next] arch_topology: Fix cache attributes detection in the CPU hotplug path
Date: Wed, 13 Jul 2022 15:18:57 +0100	[thread overview]
Message-ID: <20220713141857.p3ruapm6b4in574j@bogus> (raw)
In-Reply-To: <Ys7QzJ14brtz23XY@kroah.com>

On Wed, Jul 13, 2022 at 04:03:56PM +0200, Greg Kroah-Hartman wrote:
> On Wed, Jul 13, 2022 at 02:33:44PM +0100, Sudeep Holla wrote:
> > init_cpu_topology() is called only once at boot, and all the cache
> > attributes are detected early for all possible CPUs. However, when
> > CPUs are hotplugged out, their cacheinfo is removed. The attributes
> > are added back when the CPUs are hotplugged back in as part of the
> > CPU hotplug state machine, but that happens quite late, after
> > update_siblings_masks() has already run in secondary_start_kernel(),
> > resulting in wrong llc_sibling masks.
> > 
> > Move the call to detect_cache_attributes() inside update_siblings_masks()
> > to ensure the cacheinfo is updated before the LLC sibling masks are
> > computed. This fixes the incorrect LLC sibling masks generated when
> > CPUs are hotplugged out and then hotplugged back in again.
> > 
> > Reported-by: Ionela Voinescu <ionela.voinescu@arm.com>
> > Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
> > ---
> >  drivers/base/arch_topology.c | 16 ++++++----------
> >  1 file changed, 6 insertions(+), 10 deletions(-)
> > 
> > Hi Conor,
> > 
> > Ionela reported an issue with CPU hotplug, and as a fix I need to move
> > the call to detect_cache_attributes(). I had originally intended to
> > keep it here, but for no particular reason moved it to init_cpu_topology().
> > 
> > I wonder if this fixes the -ENOMEM on RISC-V, as the call now happens on
> > the CPU itself in the secondary CPU init path, whereas init_cpu_topology()
> > executed detect_cache_attributes() for all possible CPUs much earlier. I
> > think this might help, as the percpu memory might already be initialised
> > at that point.
> > 
> > Anyway, please give this a try, and also test CPU hotplug to check that
> > nothing is broken on RISC-V. We noticed this bug on only one platform.
> > 
> > Regards,
> > Sudeep
> > 
> > diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> > index 441e14ac33a4..0424b59b695e 100644
> > --- a/drivers/base/arch_topology.c
> > +++ b/drivers/base/arch_topology.c
> > @@ -732,7 +732,11 @@ const struct cpumask *cpu_clustergroup_mask(int cpu)
> >  void update_siblings_masks(unsigned int cpuid)
> >  {
> >  	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
> > -	int cpu;
> > +	int cpu, ret;
> > +
> > +	ret = detect_cache_attributes(cpuid);
> > +	if (ret)
> > +		pr_info("Early cacheinfo failed, ret = %d\n", ret);
> 
> No erroring out?
> 

No, this is optional: not all platforms describe cacheinfo in the DT, and the
scheduler must work even without the cache information. It may not produce
optimal performance, but it must work.

Also, on one RISC-V platform (probably due to a low percpu allocation), we have
seen the early detection fail, although it works just fine later from
device_initcall(). That was the main reason for adding the error log, but the
idea is to continue building the information for the scheduler domains even if
the LLC information can't be obtained. In case of failure, we assume all CPUs
have only private caches and no shared LLC.
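
To make that fallback concrete, here is a rough sketch of the LLC part of the
sibling-mask update (simplified, not the exact kernel code, and assuming the
last_level_cache_is_shared() helper from the cacheinfo rework):

	for_each_online_cpu(cpu) {
		cpu_topo = &cpu_topology[cpu];

		/*
		 * If detect_cache_attributes() failed, no cacheinfo is
		 * populated, so this test never matches and no cross-CPU
		 * LLC sharing is recorded; each CPU effectively keeps only
		 * itself in its llc_sibling mask, which is exactly the
		 * "private caches, no shared LLC" assumption.
		 */
		if (last_level_cache_is_shared(cpu, cpuid)) {
			cpumask_set_cpu(cpu, &cpuid_topo->llc_sibling);
			cpumask_set_cpu(cpuid, &cpu_topo->llc_sibling);
		}
	}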

Hope that makes sense. Let me know if you would prefer to drop the error log or
change anything else. I only added it because we found cases of -ENOMEM on
RISC-V and wanted to highlight that.
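
For reference, on that RISC-V platform the only visible effect of the failure
with this patch would be a dmesg line along the lines of:

  Early cacheinfo failed, ret = -12

(-12 being -ENOMEM), after which boot continues and the topology falls back to
the private-cache assumption described above.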

-- 
Regards,
Sudeep

Thread overview: 51+ messages

2022-07-13 13:33 [PATCH -next] arch_topology: Fix cache attributes detection in the CPU hotplug path Sudeep Holla
2022-07-13 14:03 ` Greg Kroah-Hartman
2022-07-13 14:18   ` Sudeep Holla [this message]
2022-07-13 16:04 ` Conor.Dooley
2022-07-14 14:17 ` Conor.Dooley
2022-07-14 15:01   ` Sudeep Holla
2022-07-14 15:27     ` Conor.Dooley
2022-07-14 16:00       ` Sudeep Holla
2022-07-14 16:10         ` Conor.Dooley
2022-07-15  9:11           ` Sudeep Holla
2022-07-15  9:16             ` Conor.Dooley
2022-07-15 14:04               ` Conor.Dooley
2022-07-15 15:41                 ` Sudeep Holla
2022-07-14 17:52 ` Ionela Voinescu
2022-07-18 17:41 ` Guenter Roeck
2022-07-18 17:57   ` Conor.Dooley
2022-07-19 10:29     ` Sudeep Holla
