From: Viresh Kumar <viresh.kumar@linaro.org>
To: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: andersson@kernel.org, krzysztof.kozlowski+dt@linaro.org,
rafael@kernel.org, robh+dt@kernel.org, johan@kernel.org,
devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Subject: Re: [PATCH v2 3/7] cpufreq: qcom-hw: Remove un-necessary cpumask_empty() check
Date: Wed, 2 Nov 2022 12:20:35 +0530 [thread overview]
Message-ID: <20221102065035.nf7m33acsjp4foit@vireshk-i7> (raw)
In-Reply-To: <20221025073254.1564622-4-manivannan.sadhasivam@linaro.org>
On 25-10-22, 13:02, Manivannan Sadhasivam wrote:
> The cpufreq core always sets the bit of the CPU that comes up first in a
> domain/policy in the "policy->cpus" mask before calling the driver's
> ->init() callback. So there is no way the "policy->cpus" mask can be
> empty during qcom_cpufreq_hw_cpu_init().
>
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
> drivers/cpufreq/qcom-cpufreq-hw.c | 5 -----
> 1 file changed, 5 deletions(-)
>
> diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
> index d5ef3c66c762..a5b3b8d0e164 100644
> --- a/drivers/cpufreq/qcom-cpufreq-hw.c
> +++ b/drivers/cpufreq/qcom-cpufreq-hw.c
> @@ -552,11 +552,6 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
> data->per_core_dcvs = true;
>
> qcom_get_related_cpus(index, policy->cpus);
> - if (cpumask_empty(policy->cpus)) {
> - dev_err(dev, "Domain-%d failed to get related CPUs\n", index);
> - ret = -ENOENT;
> - goto error;
> - }
>
> policy->driver_data = data;
> policy->dvfs_possible_from_any_cpu = true;
Applied. Thanks.
I tried applying patches 4-6 as well, but git am failed. You can send such
cleanups separately, so they don't need to wait for the others to be
reviewed.
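Splitting the cleanups out as their own series is straightforward with git format-patch / git send-email. An illustrative sketch (the temporary repo and empty commits below are stand-ins so it runs self-contained; the real workflow starts from the contributor's existing branch with the three cleanup commits on top):

```shell
# Build a throwaway repo whose top three commits play the role of patches 4-6.
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q cleanup-series && cd cleanup-series
git -c user.name=demo -c user.email=demo@example.com \
	commit -q --allow-empty -m "Initial commit"
for msg in "cpufreq: qcom-hw: Allocate qcom_cpufreq_data during probe" \
           "cpufreq: qcom-hw: Use cached dev pointer in probe()" \
           "cpufreq: qcom-hw: Move soc_data to struct qcom_cpufreq"; do
	git -c user.name=demo -c user.email=demo@example.com \
		commit -q --allow-empty -m "$msg"
done
# Export the three topmost commits as a standalone v1 series with a cover letter:
git format-patch -3 -v1 --cover-letter -o outgoing/
ls outgoing/
# Then send them independently of the rest of the original series, e.g.:
#   git send-email --to=linux-pm@vger.kernel.org outgoing/*.patch
```

The -v1 reroll marker signals a fresh series rather than a resend of the old numbering.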
--
viresh