From: Nicola Mazzucato <nicola.mazzucato@arm.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, linux-pm@vger.kernel.org,
sudeep.holla@arm.com, rjw@rjwysocki.net, vireshk@kernel.org,
cristian.marussi@arm.com, morten.rasmussen@arm.com,
chris.redpath@arm.com
Subject: Re: [PATCH v6 2/4] scmi-cpufreq: Move CPU initialisation to probe
Date: Thu, 14 Jan 2021 13:37:44 +0000 [thread overview]
Message-ID: <027a2d94-b804-2c99-ad1a-23927d361fe5@arm.com> (raw)
In-Reply-To: <20210114050753.u3b5z6kgbup7xbuh@vireshk-i7>
Hi Viresh,
many thanks for your suggestions. I will prepare a new version based on those.
Many thanks,
Nicola
On 1/14/21 5:07 AM, Viresh Kumar wrote:
> On 13-01-21, 11:55, Nicola Mazzucato wrote:
>> On 1/12/21 11:17 AM, Viresh Kumar wrote:
>>> This could have been done with a per-cpu variable instead.
>>
>> sure, I can do a DEFINE_PER_CPU() for it if it makes it better.
>
> If we don't go with the linked list approach, then yes.
>
>>>> + for_each_possible_cpu(cpu) {
>>>> + if (!zalloc_cpumask_var(&cpudata_table[cpu].scmi_shared_cpus,
>>>> + GFP_KERNEL))
>>>> + goto out;
>>>> + }
>>>
>>> You are making a copy of the struct for each CPU and so for a 16 CPUs
>>> sharing their clock lines, you will have 16 copies of the exact same
>>> stuff.
>>>
>>> An optimal approach would be to have a linked-list of this structure
>>> and that will only have 1 node per cpufreq policy.
>>
>> It is allocating space for the cpumask for each of the cpu. No data is copied yet.
>
> Yes, I was talking about the whole design here.
>
>> I understand the optimisation, but I don't see a linkage to cpufreq policy to be
>> a good idea. This cpudata is for internal storage of scmi and opp-shared info
>> and it is not tied to cpufreq policy.
>
> Well, it is going to be the same information for all CPUs of a policy, isn't it
> ?
>
>> We have moved all the cpu bits to probe
>> and at this stage we have no knowledge of cpufreq polices.
>
> Yes, but you are reading that information from scmi or DT (empty opp tables) and
> so you know what the cpumasks are going to be set to. The linked list is the
> right solution in my opinion, it is much more optimal.
>
>>>> +static int scmi_init_device(const struct scmi_handle *handle, int cpu)
>>>> +{
>>>> + struct device *cpu_dev;
>>>> + int ret, nr_opp;
>>>> + struct em_data_callback em_cb = EM_DATA_CB(scmi_get_cpu_power);
>>>> + bool power_scale_mw;
>>>> + cpumask_var_t scmi_cpus;
>>>> +
>>>> + if (!zalloc_cpumask_var(&scmi_cpus, GFP_KERNEL))
>>>> + return -ENOMEM;
>>>> +
>>>> + cpumask_set_cpu(cpu, scmi_cpus);
>>>> +
>>>> + cpu_dev = get_cpu_device(cpu);
>>>> +
>>>> + ret = scmi_get_sharing_cpus(cpu_dev, scmi_cpus);
>>>
>>> Where do you expect the sharing information to come from in this case
>>> ? DT ?
>>
>> Coming from SCMI perf. The source of info has not changed.
>>
>>>
>>>> + if (ret) {
>>>> + dev_warn(cpu_dev, "failed to get sharing cpumask\n");
>>>> + goto free_cpumask;
>>>> + }
>>>> +
>>>> + /*
>>>> + * We get here for each CPU. Add OPPs only on those CPUs for which we
>>>> + * haven't already done so, or set their OPPs as shared.
>>>> + */
>>>> + nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
>>>> + if (nr_opp <= 0) {
>>>> + ret = handle->perf_ops->device_opps_add(handle, cpu_dev);
>>>> + if (ret) {
>>>> + dev_warn(cpu_dev, "failed to add opps to the device\n");
>>>> + goto free_cpumask;
>>>> + }
>>>> +
>>>> + ret = dev_pm_opp_set_sharing_cpus(cpu_dev, scmi_cpus);
>>>> + if (ret) {
>>>> + dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
>>>> + __func__, ret);
>>>> + goto free_cpumask;
>>>> + }
>>>> +
>>>> + nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
>>>
>>> Shouldn't you do this just after adding the OPPs ?
>>
>> This was suggested earlier. It was moved closer to em_registration to where the
>> nr_opp is used. One way or the other as I don't have a strong preference for its
>> place.
>
> It is better to move it above as this will shorten the error path.
>
next prev parent reply other threads:[~2021-01-14 13:36 UTC|newest]
Thread overview: 13+ messages / expand[flat|nested] mbox.gz Atom feed top
2021-01-11 15:45 [PATCH v6 0/4] CPUFreq: Add support for opp-sharing cpus Nicola Mazzucato
2021-01-11 15:45 ` [PATCH v6 1/4] scmi-cpufreq: Remove deferred probe Nicola Mazzucato
2021-01-11 15:45 ` [PATCH v6 2/4] scmi-cpufreq: Move CPU initialisation to probe Nicola Mazzucato
2021-01-12 11:17 ` Viresh Kumar
2021-01-13 11:55 ` Nicola Mazzucato
2021-01-14 5:07 ` Viresh Kumar
2021-01-14 13:37 ` Nicola Mazzucato [this message]
2021-01-14 16:54 ` Cristian Marussi
2021-01-30 9:43 ` Nicola Mazzucato
2021-01-11 15:45 ` [PATCH v6 3/4] scmi-cpufreq: Get opp_shared_cpus from opp-v2 for EM Nicola Mazzucato
2021-01-12 11:20 ` Viresh Kumar
2021-01-13 12:00 ` Nicola Mazzucato
2021-01-11 15:45 ` [PATCH v6 4/4] cpufreq: blacklist Arm Vexpress platforms in cpufreq-dt-platdev Nicola Mazzucato
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=027a2d94-b804-2c99-ad1a-23927d361fe5@arm.com \
--to=nicola.mazzucato@arm.com \
--cc=chris.redpath@arm.com \
--cc=cristian.marussi@arm.com \
--cc=linux-arm-kernel@lists.infradead.org \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-pm@vger.kernel.org \
--cc=morten.rasmussen@arm.com \
--cc=rjw@rjwysocki.net \
--cc=sudeep.holla@arm.com \
--cc=viresh.kumar@linaro.org \
--cc=vireshk@kernel.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).