From: Sumit Gupta <sumitg@nvidia.com>
To: Sudeep Holla <sudeep.holla@arm.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	"linux-kernel@vger.kernel.org List"
	<linux-kernel@vger.kernel.org>,
	Mikko Perttunen <mperttunen@nvidia.com>,
	Hulk Robot <hulkci@huawei.com>, Bibek Basu <bbasu@nvidia.com>,
	Thierry Reding <thierry.reding@gmail.com>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	linux-tegra <linux-tegra@vger.kernel.org>,
	Sumit Gupta <sumitg@nvidia.com>,
	Jon Hunter <jonathanh@nvidia.com>, Will Deacon <will@kernel.org>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH -next] arm64: Export __cpu_logical_map
Date: Sat, 1 Aug 2020 17:46:43 +0530
Message-ID: <e3a4bc21-c334-4d48-90b5-aab8d187939e@nvidia.com>
In-Reply-To: <20200727160515.GA8003@bogus>


>>>>> ERROR: modpost: "__cpu_logical_map" [drivers/cpufreq/tegra194-cpufreq.ko] undefined!
>>>>>
>>>>> The arm64 tegra194-cpufreq driver uses cpu_logical_map(); export
>>>>> __cpu_logical_map to fix the build issue.
>>>>>
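(For reference, the change under discussion adds an export next to the
map's definition in arch/arm64/kernel/setup.c, along these lines; this
is a sketch, not a quote from the actual patch:)

  u64 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
  EXPORT_SYMBOL(__cpu_logical_map);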
>>>
>>> I wonder why, unlike the other instances in drivers, the MPIDR is not
>>> read directly from the CPU. The cpufreq_driver->init call happens when
>>> the CPU is being brought online and is executed on that CPU, IIUC.
>>>
>> Yes, that is what happens in the hotplug case.
>> But at system boot, 'cpufreq_driver->init' is called later, during the
>> cpufreq platform driver's probe, and the CPU in 'policy->cpu' can differ
>> from the CPU we are running on. Since read_cpuid_mpidr() only reads the
>> MPIDR of the CPU executing it, it can't be used directly here.
>>
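(Background: read_cpuid_mpidr() returns MPIDR_EL1 of whichever CPU
executes it, and MPIDR packs four 8-bit affinity fields. The sketch
below shows the shift-and-mask that the kernel's MPIDR_AFFINITY_LEVEL()
macro performs; it is illustrative, not the kernel's actual header:)

  /* Aff0 = bits [7:0], Aff1 = [15:8], Aff2 = [23:16], Aff3 = [39:32]. */
  static inline u32 mpidr_affinity_level(u64 mpidr, unsigned int level)
  {
          unsigned int shift = (level == 3) ? 32 : level * 8;

          return (mpidr >> shift) & 0xff;    /* each field is 8 bits */
  }

(On Tegra194, affinity level 1 identifies the cluster, which is what the
driver needs below.)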
> 
> Fair enough; then why not do a cross call, like in set_target? Since it
> is a one-off in init, I don't see any issue, given that you already do
> it at runtime for set_target.
> 
>>> read_cpuid_mpidr() is inline and avoids having to export the logical
>>> cpu map. Though we may not add physical hotplug anytime soon, less
>>> dependency on cpu_logical_map is better, given that we can resolve
>>> this without needing to access the map.
>>>
> 
> To be honest, we have tried to remove all dependencies on the cluster ID
> in generic code, as it is not well defined. This one is a Tegra-specific
> driver, so it should be fine. But I am still a bit nervous about
> exporting cpu_logical_map, as we have no clue what that would mean for
> physical hotplug.
> 
As suggested, I have made the change below to get the cluster number
using read_cpuid_mpidr(). Please review and let me know whether this
looks OK; I will send a formal patch if the change is fine.

Thanks,
Sumit

----

diff --git a/drivers/cpufreq/tegra194-cpufreq.c b/drivers/cpufreq/tegra194-cpufreq.c
index bae527e..06f5ccf 100644
--- a/drivers/cpufreq/tegra194-cpufreq.c
+++ b/drivers/cpufreq/tegra194-cpufreq.c
@@ -56,9 +56,11 @@ struct read_counters_work {
 
 static struct workqueue_struct *read_counters_wq;
 
-static enum cluster get_cpu_cluster(u8 cpu)
+static void get_cpu_cluster(void *cluster)
 {
-       return MPIDR_AFFINITY_LEVEL(cpu_logical_map(cpu), 1);
+       u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
+
+       *((uint32_t *)cluster) = MPIDR_AFFINITY_LEVEL(mpidr, 1);
 }
 
 /*
@@ -186,8 +188,10 @@ static unsigned int tegra194_get_speed(u32 cpu)
 static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
 {
        struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
-       int cl = get_cpu_cluster(policy->cpu);
        u32 cpu;
+       u32 cl;
+
+       smp_call_function_single(policy->cpu, get_cpu_cluster, &cl, true);
