* [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
@ 2014-09-02  3:41 Viresh Kumar
  2014-09-02  3:41 ` [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask' Viresh Kumar
                   ` (4 more replies)
  0 siblings, 5 replies; 25+ messages in thread
From: Viresh Kumar @ 2014-09-02  3:41 UTC (permalink / raw)
  To: Rafael Wysocki, yuantian.tang
  Cc: linaro-kernel, linux-pm, hongbo.zhang, leoli, Viresh Kumar

'cpu_data' is updated for policy->cpu first and then for all CPUs in
policy->cpus. policy->cpus is guaranteed to contain policy->cpu as well and so
the first write to 'cpu_data' for policy->cpu is redundant. Remove it.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
Hi Yuantian,

I was looking into this driver due to issues reported by Hongtao (cc'd) and
found that we can live without some code. These aren't fixing any bugs and are
just cleanups.

I didn't have a compiler for this and so this isn't even compiled. It would be
great if you could please review/test these patches.

 drivers/cpufreq/ppc-corenet-cpufreq.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/cpufreq/ppc-corenet-cpufreq.c b/drivers/cpufreq/ppc-corenet-cpufreq.c
index 3607070..bee5df7 100644
--- a/drivers/cpufreq/ppc-corenet-cpufreq.c
+++ b/drivers/cpufreq/ppc-corenet-cpufreq.c
@@ -199,7 +199,6 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	}
 
 	data->table = table;
-	per_cpu(cpu_data, cpu) = data;
 
 	/* update ->cpus if we have cluster, no harm if not */
 	cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
-- 
2.0.3.693.g996b0fd
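
A minimal before/after sketch of the init path this patch touches
(simplified from the diff above; variable names as in the driver):

	/* before: policy->cpu's entry is written twice */
	per_cpu(cpu_data, cpu) = data;			/* first write */
	cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
	for_each_cpu(i, per_cpu(cpu_mask, cpu))
		per_cpu(cpu_data, i) = data;		/* writes it again */

	/* after: the loop alone is enough, because the mask that seeds
	 * policy->cpus always contains policy->cpu itself */
	cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
	for_each_cpu(i, per_cpu(cpu_mask, cpu))
		per_cpu(cpu_data, i) = data;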



* [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-02  3:41 [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data Viresh Kumar
@ 2014-09-02  3:41 ` Viresh Kumar
  2014-09-02  6:46   ` Yuantian Tang
  2014-09-02  6:38 ` [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data Yuantian Tang
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: Viresh Kumar @ 2014-09-02  3:41 UTC (permalink / raw)
  To: Rafael Wysocki, yuantian.tang
  Cc: linaro-kernel, linux-pm, hongbo.zhang, leoli, Viresh Kumar

We are copying cpu_core_mask(cpu) into a per-cpu variable: 'cpu_mask'. There
is no need to do this; the copies can be replaced by direct calls to
cpu_core_mask().

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/ppc-corenet-cpufreq.c | 26 +++-----------------------
 1 file changed, 3 insertions(+), 23 deletions(-)

diff --git a/drivers/cpufreq/ppc-corenet-cpufreq.c b/drivers/cpufreq/ppc-corenet-cpufreq.c
index bee5df7..dbcac39 100644
--- a/drivers/cpufreq/ppc-corenet-cpufreq.c
+++ b/drivers/cpufreq/ppc-corenet-cpufreq.c
@@ -69,9 +69,6 @@ static const u32 *fmask;
 
 static DEFINE_PER_CPU(struct cpu_data *, cpu_data);
 
-/* cpumask in a cluster */
-static DEFINE_PER_CPU(cpumask_var_t, cpu_mask);
-
 #ifndef CONFIG_SMP
 static inline const struct cpumask *cpu_core_mask(int cpu)
 {
@@ -201,8 +198,8 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	data->table = table;
 
 	/* update ->cpus if we have cluster, no harm if not */
-	cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
-	for_each_cpu(i, per_cpu(cpu_mask, cpu))
+	cpumask_copy(policy->cpus, cpu_core_mask(cpu));
+	for_each_cpu(i, cpu_core_mask(cpu))
 		per_cpu(cpu_data, i) = data;
 
 	/* Minimum transition latency is 12 platform clocks */
@@ -236,7 +233,7 @@ static int __exit corenet_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 	kfree(data->table);
 	kfree(data);
 
-	for_each_cpu(cpu, per_cpu(cpu_mask, policy->cpu))
+	for_each_cpu(cpu, cpu_core_mask(policy->cpu))
 		per_cpu(cpu_data, cpu) = NULL;
 
 	return 0;
@@ -285,12 +282,6 @@ static int __init ppc_corenet_cpufreq_init(void)
 	if (!np)
 		return -ENODEV;
 
-	for_each_possible_cpu(cpu) {
-		if (!alloc_cpumask_var(&per_cpu(cpu_mask, cpu), GFP_KERNEL))
-			goto err_mask;
-		cpumask_copy(per_cpu(cpu_mask, cpu), cpu_core_mask(cpu));
-	}
-
 	match = of_match_node(node_matches, np);
 	data = match->data;
 	if (data) {
@@ -308,22 +299,11 @@ static int __init ppc_corenet_cpufreq_init(void)
 		pr_info("Freescale PowerPC corenet CPU frequency scaling driver\n");
 
 	return ret;
-
-err_mask:
-	for_each_possible_cpu(cpu)
-		free_cpumask_var(per_cpu(cpu_mask, cpu));
-
-	return -ENOMEM;
 }
 module_init(ppc_corenet_cpufreq_init);
 
 static void __exit ppc_corenet_cpufreq_exit(void)
 {
-	unsigned int cpu;
-
-	for_each_possible_cpu(cpu)
-		free_cpumask_var(per_cpu(cpu_mask, cpu));
-
 	cpufreq_unregister_driver(&ppc_corenet_cpufreq_driver);
 }
 module_exit(ppc_corenet_cpufreq_exit);
-- 
2.0.3.693.g996b0fd
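
The shape of the cleanup, as a sketch (only the lines relevant to the
change; full context is in the diff above):

	/* before: allocate, fill, and eventually free one snapshot per CPU */
	static DEFINE_PER_CPU(cpumask_var_t, cpu_mask);

	for_each_possible_cpu(cpu) {
		if (!alloc_cpumask_var(&per_cpu(cpu_mask, cpu), GFP_KERNEL))
			goto err_mask;
		cpumask_copy(per_cpu(cpu_mask, cpu), cpu_core_mask(cpu));
	}

	/* after: no per-cpu state to manage; read the topology directly */
	cpumask_copy(policy->cpus, cpu_core_mask(cpu));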



* RE: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
  2014-09-02  3:41 [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data Viresh Kumar
  2014-09-02  3:41 ` [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask' Viresh Kumar
@ 2014-09-02  6:38 ` Yuantian Tang
  2014-09-02  6:47   ` Viresh Kumar
  2014-09-10  4:32 ` Viresh Kumar
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: Yuantian Tang @ 2014-09-02  6:38 UTC (permalink / raw)
  To: Viresh Kumar, Rafael Wysocki; +Cc: linaro-kernel, linux-pm, LeoLi, Hongtao Jia

Hello Viresh:

I am not sure if this is right in the CPU hotplug case.

Hongtao:
Please verify this in CPU hotplug case.

Thanks,
Yuantian

> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Tuesday, September 02, 2014 11:41 AM
> To: Rafael Wysocki; Tang Yuantian-B29983
> Cc: linaro-kernel@lists.linaro.org; linux-pm@vger.kernel.org; Zhang
> Hongbo-B45939; Li Yang-Leo-R58472; Viresh Kumar
> Subject: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
> 
> 'cpu_data' is updated for policy->cpu first and then for all CPUs in
> policy->cpus. policy->cpus is guaranteed to contain policy->cpu as well and so
> the first write to 'cpu_data' for policy->cpu is redundant. Remove it.
> 
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
> Hi Yuantian,
> 
> I was looking into this driver due to issues reported by Hongtao (cc'd) and found
> that we can live without some code. These aren't fixing any bugs and are just
> cleanups.
> 
> I didn't have a compiler for this and so this isn't even compiled. It would be great if
> you could please review/test these patches.
> 
>  drivers/cpufreq/ppc-corenet-cpufreq.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/drivers/cpufreq/ppc-corenet-cpufreq.c
> b/drivers/cpufreq/ppc-corenet-cpufreq.c
> index 3607070..bee5df7 100644
> --- a/drivers/cpufreq/ppc-corenet-cpufreq.c
> +++ b/drivers/cpufreq/ppc-corenet-cpufreq.c
> @@ -199,7 +199,6 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy
> *policy)
>  	}
> 
>  	data->table = table;
> -	per_cpu(cpu_data, cpu) = data;
> 
>  	/* update ->cpus if we have cluster, no harm if not */
>  	cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
> --
> 2.0.3.693.g996b0fd



* RE: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-02  3:41 ` [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask' Viresh Kumar
@ 2014-09-02  6:46   ` Yuantian Tang
  2014-09-02  6:49     ` Viresh Kumar
  0 siblings, 1 reply; 25+ messages in thread
From: Yuantian Tang @ 2014-09-02  6:46 UTC (permalink / raw)
  To: Viresh Kumar, Rafael Wysocki; +Cc: linaro-kernel, linux-pm, Hongbo Zhang, LeoLi

Hello Viresh:

Probably it is not right, considering the CPU cluster and CPU hotplug cases.

Hongtao:
Please verify it on the t4240 platform, which has clusters.

Thanks,
Yuantian

> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Tuesday, September 02, 2014 11:41 AM
> To: Rafael Wysocki; Tang Yuantian-B29983
> Cc: linaro-kernel@lists.linaro.org; linux-pm@vger.kernel.org; Zhang
> Hongbo-B45939; Li Yang-Leo-R58472; Viresh Kumar
> Subject: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
> 
> We are copying cpu_core_mask(cpu) into a per-cpu variable: 'cpu_mask'. There
> is no need to do this; the copies can be replaced by direct calls to cpu_core_mask().
> 
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
>  drivers/cpufreq/ppc-corenet-cpufreq.c | 26 +++-----------------------
>  1 file changed, 3 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/cpufreq/ppc-corenet-cpufreq.c
> b/drivers/cpufreq/ppc-corenet-cpufreq.c
> index bee5df7..dbcac39 100644
> --- a/drivers/cpufreq/ppc-corenet-cpufreq.c
> +++ b/drivers/cpufreq/ppc-corenet-cpufreq.c
> @@ -69,9 +69,6 @@ static const u32 *fmask;
> 
>  static DEFINE_PER_CPU(struct cpu_data *, cpu_data);
> 
> -/* cpumask in a cluster */
> -static DEFINE_PER_CPU(cpumask_var_t, cpu_mask);
> -
>  #ifndef CONFIG_SMP
>  static inline const struct cpumask *cpu_core_mask(int cpu)  { @@ -201,8
> +198,8 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy *policy)
>  	data->table = table;
> 
>  	/* update ->cpus if we have cluster, no harm if not */
> -	cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
> -	for_each_cpu(i, per_cpu(cpu_mask, cpu))
> +	cpumask_copy(policy->cpus, cpu_core_mask(cpu));
> +	for_each_cpu(i, cpu_core_mask(cpu))
>  		per_cpu(cpu_data, i) = data;
> 
>  	/* Minimum transition latency is 12 platform clocks */ @@ -236,7 +233,7
> @@ static int __exit corenet_cpufreq_cpu_exit(struct cpufreq_policy *policy)
>  	kfree(data->table);
>  	kfree(data);
> 
> -	for_each_cpu(cpu, per_cpu(cpu_mask, policy->cpu))
> +	for_each_cpu(cpu, cpu_core_mask(policy->cpu))
>  		per_cpu(cpu_data, cpu) = NULL;
> 
>  	return 0;
> @@ -285,12 +282,6 @@ static int __init ppc_corenet_cpufreq_init(void)
>  	if (!np)
>  		return -ENODEV;
> 
> -	for_each_possible_cpu(cpu) {
> -		if (!alloc_cpumask_var(&per_cpu(cpu_mask, cpu), GFP_KERNEL))
> -			goto err_mask;
> -		cpumask_copy(per_cpu(cpu_mask, cpu), cpu_core_mask(cpu));
> -	}
> -
>  	match = of_match_node(node_matches, np);
>  	data = match->data;
>  	if (data) {
> @@ -308,22 +299,11 @@ static int __init ppc_corenet_cpufreq_init(void)
>  		pr_info("Freescale PowerPC corenet CPU frequency scaling driver\n");
> 
>  	return ret;
> -
> -err_mask:
> -	for_each_possible_cpu(cpu)
> -		free_cpumask_var(per_cpu(cpu_mask, cpu));
> -
> -	return -ENOMEM;
>  }
>  module_init(ppc_corenet_cpufreq_init);
> 
>  static void __exit ppc_corenet_cpufreq_exit(void)  {
> -	unsigned int cpu;
> -
> -	for_each_possible_cpu(cpu)
> -		free_cpumask_var(per_cpu(cpu_mask, cpu));
> -
>  	cpufreq_unregister_driver(&ppc_corenet_cpufreq_driver);
>  }
>  module_exit(ppc_corenet_cpufreq_exit);
> --
> 2.0.3.693.g996b0fd



* Re: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
  2014-09-02  6:38 ` [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data Yuantian Tang
@ 2014-09-02  6:47   ` Viresh Kumar
  0 siblings, 0 replies; 25+ messages in thread
From: Viresh Kumar @ 2014-09-02  6:47 UTC (permalink / raw)
  To: Yuantian Tang; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, LeoLi, Hongtao Jia

On 2 September 2014 12:08, Yuantian Tang <Yuantian.Tang@freescale.com> wrote:
> Hello Viresh:
>
> I am not sure if this is right in the CPU hotplug case.

policy->cpus must contain all the online CPUs, and so it should work for all
online CPUs. What's your doubt?


* Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-02  6:46   ` Yuantian Tang
@ 2014-09-02  6:49     ` Viresh Kumar
  2014-09-02  7:02       ` Yuantian Tang
  0 siblings, 1 reply; 25+ messages in thread
From: Viresh Kumar @ 2014-09-02  6:49 UTC (permalink / raw)
  To: Yuantian Tang
  Cc: Rafael Wysocki, linaro-kernel, linux-pm, Hongbo Zhang, LeoLi

On 2 September 2014 12:16, Yuantian Tang <Yuantian.Tang@freescale.com> wrote:
> Hello Viresh:
>
> Probably it is not right, considering the CPU cluster and CPU hotplug cases.

So in the case of hotplug, the worst that can happen is that policy->cpu gets
updated. But because all we are concerned about here is cpu_data, which is
stored in per-cpu variables and is updated for all online CPUs, it should work.

I would like to see how that fails :)


* RE: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-02  6:49     ` Viresh Kumar
@ 2014-09-02  7:02       ` Yuantian Tang
  2014-09-02  7:09         ` Viresh Kumar
  0 siblings, 1 reply; 25+ messages in thread
From: Yuantian Tang @ 2014-09-02  7:02 UTC (permalink / raw)
  To: Viresh Kumar; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, Hongbo Zhang, LeoLi

I am saying your patch 2 is probably wrong, not patch 1.
Consider the following case:
On the T4240 platform, we have 3 clusters with 8 CPUs in each cluster.
We offline CPUs 4 5 6 7 and then online them back.
CPU 4's policy->cpus is 0, 1, 2, 3, 4
CPU 5's policy->cpus is 0, 1, 2, 3, 4, 5.
....
So CPU 4's and CPU 5's policy->cpus are not right; they should be 0, 1, 2, 3, 4, 5, 6, 7.

Thanks,
Yuantian

> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Tuesday, September 02, 2014 2:49 PM
> To: Tang Yuantian-B29983
> Cc: Rafael Wysocki; linaro-kernel@lists.linaro.org; linux-pm@vger.kernel.org;
> Zhang Hongbo-B45939; Li Yang-Leo-R58472
> Subject: Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable
> 'cpu_mask'
> 
> On 2 September 2014 12:16, Yuantian Tang <Yuantian.Tang@freescale.com>
> wrote:
> > Hello Viresh:
> >
> > Probably it is not right, considering the CPU cluster and CPU hotplug cases.
> 
> So in the case of hotplug, the worst that can happen is that policy->cpu gets
> updated. But because all we are concerned about here is cpu_data, which is
> stored in per-cpu variables and is updated for all online CPUs, it should work.
> 
> I would like to see how that fails :)


* Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-02  7:02       ` Yuantian Tang
@ 2014-09-02  7:09         ` Viresh Kumar
  2014-09-02  8:08           ` Yuantian Tang
  2014-09-03  8:02           ` Yuantian Tang
  0 siblings, 2 replies; 25+ messages in thread
From: Viresh Kumar @ 2014-09-02  7:09 UTC (permalink / raw)
  To: Yuantian Tang
  Cc: Rafael Wysocki, linaro-kernel, linux-pm, Hongbo Zhang, LeoLi

On 2 September 2014 12:32, Yuantian Tang <Yuantian.Tang@freescale.com> wrote:
> I am saying your patch 2 is probably wrong, not patch 1.

Okay, initially it looked like both were screwed up :)

> Consider the following case:
> On the T4240 platform, we have 3 clusters with 8 CPUs in each cluster.
> We offline CPUs 4 5 6 7 and then online them back.

i.e. the last four CPUs of the first cluster..

> CPU 4's policy->cpus is 0, 1, 2, 3, 4
> CPU 5's policy->cpus is 0, 1, 2, 3, 4, 5.

How? And how is this patch going to touch policy->cpus?
CPUs 0-7 should be sharing a single 'struct cpufreq_policy',
and so policy->cpus should be the same for all..


* RE: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-02  7:09         ` Viresh Kumar
@ 2014-09-02  8:08           ` Yuantian Tang
  2014-09-03  8:02           ` Yuantian Tang
  1 sibling, 0 replies; 25+ messages in thread
From: Yuantian Tang @ 2014-09-02  8:08 UTC (permalink / raw)
  To: Viresh Kumar; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, Hongbo Zhang, LeoLi

If policy->cpus is the same for all CPUs in a cluster, that should not be a problem.
Probably there's something else that needs to be checked.
I have a feeling that your patch may not be correct. I hope it works though.

Thanks,
Yuantian

> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Tuesday, September 02, 2014 3:10 PM
> To: Tang Yuantian-B29983
> Cc: Rafael Wysocki; linaro-kernel@lists.linaro.org; linux-pm@vger.kernel.org;
> Zhang Hongbo-B45939; Li Yang-Leo-R58472
> Subject: Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable
> 'cpu_mask'
> 
> On 2 September 2014 12:32, Yuantian Tang <Yuantian.Tang@freescale.com>
> wrote:
> > I am saying your patch 2 is probably wrong, not patch 1.
> 
> Okay, initially it looked like both were screwed up :)
> 
> > Consider the following case:
> > On the T4240 platform, we have 3 clusters with 8 CPUs in each cluster.
> > We offline CPUs 4 5 6 7 and then online them back.
> 
> i.e. the last four CPUs of the first cluster..
> 
> > CPU 4's policy->cpus is 0, 1, 2, 3, 4
> > CPU 5's policy->cpus is 0, 1, 2, 3, 4, 5.
> 
> How? And how is this patch going to touch policy->cpus?
> CPUs 0-7 should be sharing a single 'struct cpufreq_policy',
> and so policy->cpus should be the same for all..


* RE: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-02  7:09         ` Viresh Kumar
  2014-09-02  8:08           ` Yuantian Tang
@ 2014-09-03  8:02           ` Yuantian Tang
  2014-09-03  9:50             ` Viresh Kumar
  1 sibling, 1 reply; 25+ messages in thread
From: Yuantian Tang @ 2014-09-03  8:02 UTC (permalink / raw)
  To: Viresh Kumar; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, LeoLi, Hongtao Jia

Hello Viresh,

Testing confirmed my concern. Patch 1 works, but patch 2 does not work in the hotplug case.
(Tested on the t4240 platform with 3 clusters and 8 CPUs in each cluster; kernel version is 3.12.)

Log is as follows:
root@t4240rdb:~# cd /sys/devices/system/
root@t4240rdb:/sys/devices/system# ls
clockevents  clocksource  cpu  edac
root@t4240rdb:/sys/devices/system# cd cpu/
root@t4240rdb:/sys/devices/system/cpu# ls
cpu0  cpu1  cpu10  cpu11  cpu12  cpu13  cpu14  cpu15  cpu16  cpu17  cpu18  cpu19  cpu2  cpu20  cpu21  cpu22  cpu23  cpu3  cpu4  cpu5  cpu6  cpu7  cpu8  cpu9  kernel_max  offline  online  possible  present  probe  release  uevent
root@t4240rdb:/sys/devices/system/cpu# cat cpu8/
altivec_idle            altivec_idle_wait_time  cache/                  cpufreq/                online                  physical_id             pir                     pw20_state              pw20_wait_time          smt_snooze_delay        subsystem/              topology/               uevent
root@t4240rdb:/sys/devices/system/cpu# cat cpu8/cpufreq/affected_cpus
8 9 10 11 12 13 14 15
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu8/online
Cannot set affinity for irq 467
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu9/online
root@t4240rdb:/sys/devices/system/cpu# cat cpu10/cpufreq/affected_cpus
10 11 12 13 14 15
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu10/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu11/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu12/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu13/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu14/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu15/online
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu9/online
smp_85xx_kick_cpu: Can not start CPU #9. Start CPU #8 first.
smp: failed starting cpu 9 (rc -2)
-sh: echo: write error: No such file or directory
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu10/online
root@t4240rdb:/sys/devices/system/cpu# cat cpu10/cpufreq/affected_cpus
10
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu11/online
sysfs: cannot create duplicate filename '/devices/system/cpu/cpu10/cpufreq'
------------[ cut here ]------------
WARNING: at fs/sysfs/dir.c:526
Modules linked in:
CPU: 18 PID: 2658 Comm: sh Not tainted 3.12.19-rt30-00018-g7eac4c0-dirty #2
task: c00000015fa0b440 ti: c00000007a54c000 task.ti: c00000007a54c000
NIP: c00000000018ea34 LR: c00000000018ea30 CTR: c000000000304330
REGS: c00000007a54f340 TRAP: 0700   Not tainted  (3.12.19-rt30-00018-g7eac4c0-dirty)
MSR: 0000000080029000 <CE,EE,ME>  CR: 44222444  XER: 20000000
SOFTE: 1

GPR00: c00000000018ea30 c00000007a54f5c0 c000000000b4d1a0 000000000000004b
GPR04: 0000000044222444 c000000009620e70 0000000000000008 0000000000000008
GPR08: 0000000000000000 0000000000000001 80000800803b8500 0000000000000000
GPR12: 0000000024222442 c00000000fffc700 0000000000000000 00000000100e0000
GPR16: 00000000100daea4 00000000100e0000 c0000000009738b8 c000000179a29e58
GPR20: c000000000a42680 c000000000c0d1a0 c000000000b57c20 c000000000b57600
GPR24: 0000000000000000 c000000000a789e0 0000000000000001 0000000000000000
GPR28: c00000017545c000 c00000007a54f6c0 c000000179962278 ffffffffffffffef
NIP [c00000000018ea34] .sysfs_add_one+0xd0/0xec
LR [c00000000018ea30] .sysfs_add_one+0xcc/0xec
Call Trace:
[c00000007a54f5c0] [c00000000018ea30] .sysfs_add_one+0xcc/0xec (unreliable)
[c00000007a54f650] [c00000000018faf8] .sysfs_do_create_link_sd+0xfc/0x2bc
[c00000007a54f700] [c00000000056468c] .__cpufreq_add_dev.isra.24+0x728/0x768
[c00000007a54f7e0] [c000000000564758] .cpufreq_cpu_callback+0x84/0x108
[c00000007a54f860] [c000000000067218] .notifier_call_chain+0x80/0xe8
[c00000007a54f900] [c00000000003be20] .__cpu_notify+0x34/0x78
[c00000007a54f970] [c00000000003c5cc] .cpu_up+0x194/0x1e4
[c00000007a54fa20] [c0000000005f6318] .cpu_subsys_online+0x24/0x4c
[c00000007a54fab0] [c0000000003173dc] .device_online+0xa4/0x104
[c00000007a54fb40] [c0000000003174c4] .online_store+0x88/0x90
[c00000007a54fbd0] [c0000000003143d0] .dev_attr_store+0x30/0x4c
[c00000007a54fc40] [c00000000018c98c] .sysfs_write_file+0xe8/0x1ac
[c00000007a54fcf0] [c00000000011324c] .vfs_write+0xe8/0x21c
[c00000007a54fd90] [c000000000113858] .SyS_write+0x58/0xcc
[c00000007a54fe30] [c000000000000598] syscall_exit+0x0/0x8c
Instruction dump:
4810e775 60000000 e89e0010 7f83e378 38a01000 4810e761 60000000 7f84e378
3c62ffde 38630e60 485b3755 60000000 <0fe00000> 7f83e378 4bf7d43d 60000000
---[ end trace 275aa88102f60f48 ]---
root@t4240rdb:/sys/devices/system/cpu#

The original driver is OK; the log is:
t4240rdb login: root
root@t4240rdb:~#
root@t4240rdb:~#
root@t4240rdb:~#
root@t4240rdb:~# cd /sys/devices/system/cpu/
root@t4240rdb:/sys/devices/system/cpu# cat cpu8/cpufreq/affected_cpus
8 9 10 11 12 13 14 15
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu8/online
Cannot set affinity for irq 467
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu9/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu10/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu11/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu12/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu13/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu14/online
root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu15/online
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu14/online
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu15/online
root@t4240rdb:/sys/devices/system/cpu# cat cpu14/cpufreq/affected_cpus
14 15
root@t4240rdb:/sys/devices/system/cpu# cat cpu15/cpufreq/affected_cpus
14 15
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu12/online
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu13/online
root@t4240rdb:/sys/devices/system/cpu# cat cpu15/cpufreq/affected_cpus
12 13 14 15
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu10/online
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu11/online
root@t4240rdb:/sys/devices/system/cpu# cat cpu15/cpufreq/affected_cpus
10 11 12 13 14 15
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu8/online
root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu9/online
root@t4240rdb:/sys/devices/system/cpu# cat cpu15/cpufreq/affected_cpus
8 9 10 11 12 13 14 15
root@t4240rdb:/sys/devices/system/cpu# cat cpu9/cpufreq/affected_cpus
8 9 10 11 12 13 14 15
root@t4240rdb:/sys/devices/system/cpu#
root@t4240rdb:/sys/devices/system/cpu# uname -a
Linux t4240rdb 3.12.19-rt30-00018-g7eac4c0-dirty #4 SMP Wed Sep 3 14:56:49 CST 2014 ppc64 GNU/Linux
root@t4240rdb:/sys/devices/system/cpu#

Thanks,
Yuantian

> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Tuesday, September 02, 2014 3:10 PM
> To: Tang Yuantian-B29983
> Cc: Rafael Wysocki; linaro-kernel@lists.linaro.org; linux-pm@vger.kernel.org;
> Zhang Hongbo-B45939; Li Yang-Leo-R58472
> Subject: Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable
> 'cpu_mask'
> 
> On 2 September 2014 12:32, Yuantian Tang <Yuantian.Tang@freescale.com>
> wrote:
> > I am saying your patch 2 probably is wrong, not patch 1.
> 
> Okay, it looked initially that both are screwed up :)
> 
> > Consider following case:
> > On T4240 platform, we have 3 cluster with 8 cpu in each cluster.
> > We offline 4 5 6 7 cpu and then online them back.
> 
> i.e. last four CPUs of first cluster..
> 
> > Cpu 4's policy->cpus is 0, 1, 2, 3, 4
> > Cpu 5's policy->cpus is 0, 1, 2, 3, 4, 5.
> 
> How? And how is this patch going to touch policy->cpus?
> CPU 0-7 should be sharing a single 'struct cpufreq_policy'
> and so policy->cpus should be same for all..


* Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-03  8:02           ` Yuantian Tang
@ 2014-09-03  9:50             ` Viresh Kumar
  2014-09-04  2:51               ` Yuantian Tang
  0 siblings, 1 reply; 25+ messages in thread
From: Viresh Kumar @ 2014-09-03  9:50 UTC (permalink / raw)
  To: Yuantian Tang; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, LeoLi, Hongtao Jia

On 3 September 2014 13:32, Yuantian Tang <Yuantian.Tang@freescale.com> wrote:
> Testing confirmed my concern. Patch 1 works, but patch 2 does not work in the hotplug case.
> (Tested on the t4240 platform with 3 clusters and 8 CPUs in each cluster; kernel version is 3.12.)

:(

> Log is as follows:
> root@t4240rdb:~# cd /sys/devices/system/
> root@t4240rdb:/sys/devices/system# ls
> clockevents  clocksource  cpu  edac
> root@t4240rdb:/sys/devices/system# cd cpu/
> root@t4240rdb:/sys/devices/system/cpu# ls
> cpu0  cpu1  cpu10  cpu11  cpu12  cpu13  cpu14  cpu15  cpu16  cpu17  cpu18  cpu19  cpu2  cpu20  cpu21  cpu22  cpu23  cpu3  cpu4  cpu5  cpu6  cpu7  cpu8  cpu9  kernel_max  offline  online  possible  present  probe  release  uevent
> root@t4240rdb:/sys/devices/system/cpu# cat cpu8/
> altivec_idle            altivec_idle_wait_time  cache/                  cpufreq/                online                  physical_id             pir                     pw20_state              pw20_wait_time          smt_snooze_delay        subsystem/              topology/               uevent
> root@t4240rdb:/sys/devices/system/cpu# cat cpu8/cpufreq/affected_cpus
> 8 9 10 11 12 13 14 15
> root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu8/online
> Cannot set affinity for irq 467
> root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu9/online
> root@t4240rdb:/sys/devices/system/cpu# cat cpu10/cpufreq/affected_cpus
> 10 11 12 13 14 15
> root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu10/online
> root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu11/online
> root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu12/online
> root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu13/online
> root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu14/online
> root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu15/online
> root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu9/online
> smp_85xx_kick_cpu: Can not start CPU #9. Start CPU #8 first.
> smp: failed starting cpu 9 (rc -2)
> -sh: echo: write error: No such file or directory

I don't understand how this is related to my patch. And the successful
sequence below has CPU14 here instead of 8, so this error might come without
my patch as well if this sequence is followed..

> root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu10/online
> root@t4240rdb:/sys/devices/system/cpu# cat cpu10/cpufreq/affected_cpus
> 10
> root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu11/online
> sysfs: cannot create duplicate filename '/devices/system/cpu/cpu10/cpufreq'
> ------------[ cut here ]------------
> WARNING: at fs/sysfs/dir.c:526

I think I know why this happened.

Does cpu_core_mask() give a mask of ONLY online CPUs? I *strongly* believe
YES, looking at the above crash.

If we can replace it with some other routine which gives a mask of all
(online+offline) CPUs, then this crash might go away..

Because of that, I believe there is another problem in your driver even
without my patch. Do this to get a similar crash:

- build driver as module and don't load it by default
- boot your system
- offline CPUs 8 to 15 as you did above.
- insert module
- try to online cores as you did above.
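
To make that concrete, a sketch of what ->init() would see in the failing
sequence, assuming cpu_core_mask() covers only online siblings (which is
exactly the open question above):

	/* CPUs 8-15 share a cluster; all are offline, then CPU 10 comes up.
	 * ->init() for CPU 10 runs with a shrunken sibling mask: */
	cpumask_copy(policy->cpus, cpu_core_mask(10));	/* just { 10 } */

	/* When CPU 11 comes online next, no existing policy covers it, so
	 * the core creates a second policy for the same cluster and tries to
	 * add sysfs links for CPU 10 again -- hence the "cannot create
	 * duplicate filename .../cpu10/cpufreq" warning in the log above. */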

--
viresh


* RE: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-03  9:50             ` Viresh Kumar
@ 2014-09-04  2:51               ` Yuantian Tang
  2014-09-04  3:38                 ` Viresh Kumar
  0 siblings, 1 reply; 25+ messages in thread
From: Yuantian Tang @ 2014-09-04  2:51 UTC (permalink / raw)
  To: Viresh Kumar; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, LeoLi, Hongtao Jia

Couldn't agree with you more.
As far as I know, there is no such function to get the mask of online and offline CPUs in a cluster.

Thanks,
Yuantian

> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Wednesday, September 03, 2014 5:50 PM
> To: Tang Yuantian-B29983
> Cc: Rafael Wysocki; linaro-kernel@lists.linaro.org; linux-pm@vger.kernel.org; Li
> Yang-Leo-R58472; Jia Hongtao-B38951
> Subject: Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable
> 'cpu_mask'
> 
> On 3 September 2014 13:32, Yuantian Tang <Yuantian.Tang@freescale.com>
> wrote:
> > Testing confirmed my concern. Patch 1 works, but patch 2 does not work in
> > the hotplug case.
> > (Tested on the t4240 platform with 3 clusters and 8 CPUs in each cluster;
> > kernel version is 3.12.)
> 
> :(
> 
> > Log is as follows:
> > root@t4240rdb:~# cd /sys/devices/system/
> > root@t4240rdb:/sys/devices/system# ls
> > clockevents  clocksource  cpu  edac
> > root@t4240rdb:/sys/devices/system# cd cpu/
> > root@t4240rdb:/sys/devices/system/cpu# ls
> > cpu0  cpu1  cpu10  cpu11  cpu12  cpu13  cpu14  cpu15  cpu16  cpu17  cpu18  cpu19  cpu2  cpu20  cpu21  cpu22  cpu23  cpu3  cpu4  cpu5  cpu6  cpu7  cpu8  cpu9  kernel_max  offline  online  possible  present  probe  release  uevent
> > root@t4240rdb:/sys/devices/system/cpu# cat cpu8/
> > altivec_idle            altivec_idle_wait_time  cache/                  cpufreq/                online                  physical_id             pir                     pw20_state              pw20_wait_time          smt_snooze_delay        subsystem/              topology/               uevent
> > root@t4240rdb:/sys/devices/system/cpu# cat cpu8/cpufreq/affected_cpus
> > 8 9 10 11 12 13 14 15
> > root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu8/online
> > Cannot set affinity for irq 467
> > root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu9/online
> > root@t4240rdb:/sys/devices/system/cpu# cat cpu10/cpufreq/affected_cpus
> > 10 11 12 13 14 15
> > root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu10/online
> > root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu11/online
> > root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu12/online
> > root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu13/online
> > root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu14/online
> > root@t4240rdb:/sys/devices/system/cpu# echo 0 > cpu15/online
> > root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu9/online
> > smp_85xx_kick_cpu: Can not start CPU #9. Start CPU #8 first.
> > smp: failed starting cpu 9 (rc -2)
> > -sh: echo: write error: No such file or directory
> 
> I don't understand how this is related to my patch. And the successful
> sequence below has CPU14 here instead of 8, so this error might come without
> my patch as well if this sequence is followed..
> 
> > root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu10/online
> > root@t4240rdb:/sys/devices/system/cpu# cat cpu10/cpufreq/affected_cpus
> > 10
> > root@t4240rdb:/sys/devices/system/cpu# echo 1 > cpu11/online
> > sysfs: cannot create duplicate filename '/devices/system/cpu/cpu10/cpufreq'
> > ------------[ cut here ]------------
> > WARNING: at fs/sysfs/dir.c:526
> 
> I think I know why this happened.
> 
> Does cpu_core_mask() give a mask of ONLY online CPUs? I *strongly* believe
> YES, looking at the above crash.
> 
> If we can replace it with some other routine which gives a mask of all
> (online+offline) CPUs, then this crash might go away..
> 
> Because of that, I believe there is another problem in your driver even
> without my patch. Do this to get a similar crash:
> 
> - build driver as module and don't load it by default
> - boot your system
> - offline CPUs 8 to 15 as you did above.
> - insert module
> - try to online cores as you did above.
> 
> --
> viresh


* Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-04  2:51               ` Yuantian Tang
@ 2014-09-04  3:38                 ` Viresh Kumar
  2014-09-04  4:33                   ` Yuantian Tang
  0 siblings, 1 reply; 25+ messages in thread
From: Viresh Kumar @ 2014-09-04  3:38 UTC (permalink / raw)
  To: Yuantian Tang; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, LeoLi, Hongtao Jia

On 4 September 2014 08:21, Yuantian Tang <Yuantian.Tang@freescale.com> wrote:
> Couldn't agree with you more.

It's not about upstreaming my patch, but about doing the right thing.

> As far as I know, there is no such function to get the mask of online and offline CPUs in a cluster.

This is a primary requirement of the cpufreq subsystem: the ->init()
callback must initialize policy->cpus to the mask of all online+offline
CPUs that are sharing a clock.

Okay, then what I can say is that your driver is already broken (without
my patch) when it is used as a module.

>> - build driver as module and don't load it by default
>> - boot your system
>> - offline CPUs 8 to 15 as you did above.
>> - insert module
>> - try to online cores as you did above.

The above sequence of events would reproduce a similar crash for you. So
either fix this problem or mark your Kconfig option as 'bool' instead of
'tristate' so that the driver can't be built as a module anymore.
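
For reference, the suggested Kconfig change would look roughly like this in
drivers/cpufreq/Kconfig.powerpc (a sketch; the exact option name and prompt
text are recalled, not verified):

	 config PPC_CORENET_CPUFREQ
	-	tristate "CPU frequency scaling driver for Freescale E500MC SoCs"
	+	bool "CPU frequency scaling driver for Freescale E500MC SoCs"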

--
viresh


* RE: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-04  3:38                 ` Viresh Kumar
@ 2014-09-04  4:33                   ` Yuantian Tang
  2014-09-04  4:37                     ` Viresh Kumar
  0 siblings, 1 reply; 25+ messages in thread
From: Yuantian Tang @ 2014-09-04  4:33 UTC (permalink / raw)
  To: Viresh Kumar; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, LeoLi, Hongtao Jia


> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Thursday, September 04, 2014 11:38 AM
> To: Tang Yuantian-B29983
> Cc: Rafael Wysocki; linaro-kernel@lists.linaro.org; linux-pm@vger.kernel.org; Li
> Yang-Leo-R58472; Jia Hongtao-B38951
> Subject: Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable
> 'cpu_mask'
> 
> On 4 September 2014 08:21, Yuantian Tang <Yuantian.Tang@freescale.com>
> wrote:
> > Couldn't agree with you more.
> 
> It's not about upstreaming my patch, but about doing the right thing.
> 
> > As far as I know, there is no such function to get the mask of online and
> > offline CPUs in a cluster.
> 
> This is a primary requirement of the cpufreq subsystem: the ->init()
> callback must initialize policy->cpus to the mask of all online+offline
> CPUs that are sharing a clock.
> 
> Okay, then what I can say is that your driver is already broken (without
> my patch) when it is used as a module.
> 
> >> - build driver as module and don't load it by default
> >> - boot your system
> >> - offline CPUs 8 to 15 as you did above.
> >> - insert module
> >> - try to online cores as you did above.
> 
> The above sequence of events would reproduce a similar crash for you. So
> either fix this problem or mark your Kconfig option as 'bool' instead of
> 'tristate' so that the driver can't be built as a module anymore.
> 

What about the other architectures? Can they deal with this situation?

Thanks,
Yuantian
> --
> viresh


* Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-04  4:33                   ` Yuantian Tang
@ 2014-09-04  4:37                     ` Viresh Kumar
  2014-09-05  9:35                       ` Yuantian Tang
  0 siblings, 1 reply; 25+ messages in thread
From: Viresh Kumar @ 2014-09-04  4:37 UTC (permalink / raw)
  To: Yuantian Tang; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, LeoLi, Hongtao Jia

On 4 September 2014 10:03, Yuantian Tang <Yuantian.Tang@freescale.com> wrote:
> What about the other architectures? Can they deal with this situation?

Yes for most of them (especially ARM), because they fulfill the
requirement of the cpufreq core, i.e. they set policy->cpus to all
online+offline CPUs.


* RE: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-04  4:37                     ` Viresh Kumar
@ 2014-09-05  9:35                       ` Yuantian Tang
  2014-09-05  9:40                         ` Viresh Kumar
  0 siblings, 1 reply; 25+ messages in thread
From: Yuantian Tang @ 2014-09-05  9:35 UTC (permalink / raw)
  To: Viresh Kumar; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, LeoLi, Hongtao Jia


> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Thursday, September 04, 2014 12:37 PM
> To: Tang Yuantian-B29983
> Cc: Rafael Wysocki; linaro-kernel@lists.linaro.org; linux-pm@vger.kernel.org; Li
> Yang-Leo-R58472; Jia Hongtao-B38951
> Subject: Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable
> 'cpu_mask'
> 
> On 4 September 2014 10:03, Yuantian Tang <Yuantian.Tang@freescale.com>
> wrote:
> > What about the other architectures? Can they deal with this situation?
> 
> Yes for most of them (especially ARM), because they fulfill the
> requirement of the cpufreq core, i.e. they set policy->cpus to all
> online+offline CPUs.

What's the name of the function on the ARM arch which gets the mask of all online and offline CPUs?

We want this driver to be used on ARM as well, so how should we write the Kconfig?
As you know, we already have a CONFIG_* item in Kconfig.powerpc.

Thanks,
Yuantian


* Re: [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask'
  2014-09-05  9:35                       ` Yuantian Tang
@ 2014-09-05  9:40                         ` Viresh Kumar
  0 siblings, 0 replies; 25+ messages in thread
From: Viresh Kumar @ 2014-09-05  9:40 UTC (permalink / raw)
  To: Yuantian Tang; +Cc: Rafael Wysocki, linaro-kernel, linux-pm, LeoLi, Hongtao Jia

On 5 September 2014 15:05, Yuantian Tang <Yuantian.Tang@freescale.com> wrote:
> What's the name of the function on the ARM arch which gets the mask of all online and offline CPUs?

You can have a look at arm_big_little.c for this; it's topology_core_cpumask().

Also, this might not work for everybody, so we are trying to get this
information from DT, but it is taking some time to get the bindings right.
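
A minimal sketch of that ARM-style seeding (modelled on arm_big_little.c;
example_cpufreq_init is a hypothetical name, and this assumes the topology
masks cover offline CPUs too):

	#include <linux/cpufreq.h>
	#include <linux/topology.h>

	static int example_cpufreq_init(struct cpufreq_policy *policy)
	{
		/* seed policy->cpus with every sibling that shares the clock,
		 * online or offline, so a later hotplug re-attaches CPUs to
		 * this policy instead of creating a duplicate one */
		cpumask_copy(policy->cpus, topology_core_cpumask(policy->cpu));
		return 0;
	}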

> We want this driver to be used on ARM as well, so how should we write the Kconfig?
> As you know, we already have a CONFIG_* item in Kconfig.powerpc.

Just duplicate them.


* Re: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
  2014-09-02  3:41 [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data Viresh Kumar
  2014-09-02  3:41 ` [PATCH 2/2] cpufreq: ppc-corenet: remove per-cpu variable 'cpu_mask' Viresh Kumar
  2014-09-02  6:38 ` [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data Yuantian Tang
@ 2014-09-10  4:32 ` Viresh Kumar
  2014-09-10  5:30   ` Yuantian Tang
  2014-09-10  6:20 ` Yuantian Tang
  2014-09-24 23:45 ` Rafael J. Wysocki
  4 siblings, 1 reply; 25+ messages in thread
From: Viresh Kumar @ 2014-09-10  4:32 UTC (permalink / raw)
  To: Rafael Wysocki, Yuantian Tang
  Cc: Lists linaro-kernel, linux-pm, Hongbo Zhang, Li Yang, Viresh Kumar

On 2 September 2014 09:11, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> 'cpu_data' is updated for policy->cpu first and then for all CPUs in
> policy->cpus. policy->cpus is guaranteed to contain policy->cpu as well and so
> the first write to 'cpu_data' for policy->cpu is redundant. Remove it.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
> Hi Yuantian,
>
> I was looking into this driver due to issues reported by Hongtao (cc'd) and
> found that we can live without some code. These aren't fixing any bugs and are
> just cleanups.
>
> I didn't have a compiler for this and so this isn't even compiled. It would be
> great if you could please review/test these patches.
>
>  drivers/cpufreq/ppc-corenet-cpufreq.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/drivers/cpufreq/ppc-corenet-cpufreq.c b/drivers/cpufreq/ppc-corenet-cpufreq.c
> index 3607070..bee5df7 100644
> --- a/drivers/cpufreq/ppc-corenet-cpufreq.c
> +++ b/drivers/cpufreq/ppc-corenet-cpufreq.c
> @@ -199,7 +199,6 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy *policy)
>         }
>
>         data->table = table;
> -       per_cpu(cpu_data, cpu) = data;
>
>         /* update ->cpus if we have cluster, no harm if not */
>         cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));

Hi Yuantian,

Can we get your Ack for this patch at least, so that Rafael can apply it?


* RE: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
  2014-09-10  4:32 ` Viresh Kumar
@ 2014-09-10  5:30   ` Yuantian Tang
  2014-09-10  5:39     ` Viresh Kumar
  0 siblings, 1 reply; 25+ messages in thread
From: Yuantian Tang @ 2014-09-10  5:30 UTC (permalink / raw)
  To: Viresh Kumar, Rafael Wysocki
  Cc: Lists linaro-kernel, linux-pm, Hongbo Zhang, LeoLi


> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Wednesday, September 10, 2014 12:33 PM
> To: Rafael Wysocki; Tang Yuantian-B29983
> Cc: Lists linaro-kernel; linux-pm@vger.kernel.org; Zhang Hongbo-B45939; Li
> Yang-Leo-R58472; Viresh Kumar
> Subject: Re: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of
> cpu_data
> 
> On 2 September 2014 09:11, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> > 'cpu_data' is updated for policy->cpu first and then for all CPUs in
> > policy->cpus. policy->cpus is guaranteed to contain policy->cpu as well and so
> > the first write to 'cpu_data' for policy->cpu is redundant. Remove it.
> >
> > Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> > ---
> > Hi Yuantian,
> >
> > I was looking into this driver due to issues reported by Hongtao
> > (cc'd) and found that we can live without some code. These aren't
> > fixing any bugs and are just cleanups.
> >
> > I didn't have a compiler for this and so this isn't even compiled. It
> > would be great if you could please review/test these patches.
> >
> >  drivers/cpufreq/ppc-corenet-cpufreq.c | 1 -
> >  1 file changed, 1 deletion(-)
> >
> > diff --git a/drivers/cpufreq/ppc-corenet-cpufreq.c
> > b/drivers/cpufreq/ppc-corenet-cpufreq.c
> > index 3607070..bee5df7 100644
> > --- a/drivers/cpufreq/ppc-corenet-cpufreq.c
> > +++ b/drivers/cpufreq/ppc-corenet-cpufreq.c
> > @@ -199,7 +199,6 @@ static int corenet_cpufreq_cpu_init(struct
> cpufreq_policy *policy)
> >         }
> >
> >         data->table = table;
> > -       per_cpu(cpu_data, cpu) = data;
> >
> >         /* update ->cpus if we have cluster, no harm if not */
> >         cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
> 
> Hi Yuantian,
> 
> Can we get your Ack for this patch at least, so that Rafael can apply it?
Yeah, your patch should be OK, but is it the best solution?
I was thinking about a final solution like the one ARM uses.

Thanks,
Yuantian


* Re: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
  2014-09-10  5:30   ` Yuantian Tang
@ 2014-09-10  5:39     ` Viresh Kumar
  2014-09-10  6:19       ` Yuantian Tang
  0 siblings, 1 reply; 25+ messages in thread
From: Viresh Kumar @ 2014-09-10  5:39 UTC (permalink / raw)
  To: Yuantian Tang
  Cc: Rafael Wysocki, Lists linaro-kernel, linux-pm, Hongbo Zhang, LeoLi

On 10 September 2014 11:00, Yuantian Tang <Yuantian.Tang@freescale.com> wrote:
>> On 2 September 2014 09:11, Viresh Kumar <viresh.kumar@linaro.org> wrote:
>> > 'cpu_data' is updated for policy->cpu first and then for all CPUs in
>> > policy->cpus. policy->cpus is guaranteed to contain policy->cpu as well and so
>> > the first write to 'cpu_data' for policy->cpu is redundant. Remove it.

> Yeah, your patch should be OK, but is it the best solution?
> I was thinking about a final solution like the one ARM uses.

I was talking about an Ack for just 1/2 where we are doing some cleanup,
which shouldn't break anything.

Regarding the second patch, yes you should come up with something that
can give a list of online+offline CPUs like ARM and even some ppc drivers.
We can handle that separately, and this patch can be committed by Rafael..

Either you can give a replacement patch for 2/2, or you can let me know what
to do and I will help there.


* RE: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
  2014-09-10  5:39     ` Viresh Kumar
@ 2014-09-10  6:19       ` Yuantian Tang
  0 siblings, 0 replies; 25+ messages in thread
From: Yuantian Tang @ 2014-09-10  6:19 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Rafael Wysocki, Lists linaro-kernel, linux-pm, Hongbo Zhang, LeoLi

Ok, sure.

Thanks,
Yuantian

> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Wednesday, September 10, 2014 1:39 PM
> To: Tang Yuantian-B29983
> Cc: Rafael Wysocki; Lists linaro-kernel; linux-pm@vger.kernel.org; Zhang
> Hongbo-B45939; Li Yang-Leo-R58472
> Subject: Re: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of
> cpu_data
> 
> On 10 September 2014 11:00, Yuantian Tang <Yuantian.Tang@freescale.com>
> wrote:
> >> On 2 September 2014 09:11, Viresh Kumar <viresh.kumar@linaro.org> wrote:
>> > 'cpu_data' is updated for policy->cpu first and then for all CPUs in
>> > policy->cpus. policy->cpus is guaranteed to contain policy->cpu as well and so
> >> > the first write to 'cpu_data' for policy->cpu is redundant. Remove it.
> 
> > Yeah, your patch should be ok, but is it the best solution?
> > I was thinking about a final solution just like ARM did.
> 
> I was talking about an Ack for just 1/2 where we are doing some cleanup, which
> shouldn't break anything.
> 
> Regarding the second patch, yes you should come up with something that can
> give a list of online+offline CPUs like ARM and even some ppc drivers.
> We can handle that separately, and this patch can be committed by Rafael..
> 
> Either you can give a replacement patch for 2/2, or you can let me know what
> to do and I will help there.


* RE: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
  2014-09-02  3:41 [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data Viresh Kumar
                   ` (2 preceding siblings ...)
  2014-09-10  4:32 ` Viresh Kumar
@ 2014-09-10  6:20 ` Yuantian Tang
  2014-09-24 23:45 ` Rafael J. Wysocki
  4 siblings, 0 replies; 25+ messages in thread
From: Yuantian Tang @ 2014-09-10  6:20 UTC (permalink / raw)
  To: Viresh Kumar, Rafael Wysocki; +Cc: linaro-kernel, linux-pm, Hongbo Zhang, LeoLi


> -----Original Message-----
> From: Viresh Kumar [mailto:viresh.kumar@linaro.org]
> Sent: Tuesday, September 02, 2014 11:41 AM
> To: Rafael Wysocki; Tang Yuantian-B29983
> Cc: linaro-kernel@lists.linaro.org; linux-pm@vger.kernel.org; Zhang
> Hongbo-B45939; Li Yang-Leo-R58472; Viresh Kumar
> Subject: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
> 
> 'cpu_data' is updated for policy->cpu first and then for all CPUs in
> policy->cpus. policy->cpus is guaranteed to contain policy->cpu as well and so
> the first write to 'cpu_data' for policy->cpu is redundant. Remove it.
> 
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
> Hi Yuantian,
> 
> I was looking into this driver due to issues reported by Hongtao (cc'd) and found
> that we can live without some code. These aren't fixing any bugs and are just
> cleanups.
> 
> I didn't have a compiler for this and so this isn't even compiled. It would be great if
> you could please review/test these patches.
> 
>  drivers/cpufreq/ppc-corenet-cpufreq.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/drivers/cpufreq/ppc-corenet-cpufreq.c
> b/drivers/cpufreq/ppc-corenet-cpufreq.c
> index 3607070..bee5df7 100644
> --- a/drivers/cpufreq/ppc-corenet-cpufreq.c
> +++ b/drivers/cpufreq/ppc-corenet-cpufreq.c
> @@ -199,7 +199,6 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy
> *policy)
>  	}
> 
>  	data->table = table;
> -	per_cpu(cpu_data, cpu) = data;
> 
>  	/* update ->cpus if we have cluster, no harm if not */
>  	cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
> --
> 2.0.3.693.g996b0fd

Acked-by: Tang Yuantian <Yuantian.Tang@freescale.com>

Regards,
Yuantian


* Re: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
  2014-09-02  3:41 [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data Viresh Kumar
                   ` (3 preceding siblings ...)
  2014-09-10  6:20 ` Yuantian Tang
@ 2014-09-24 23:45 ` Rafael J. Wysocki
  2014-09-29  9:05   ` Viresh Kumar
  4 siblings, 1 reply; 25+ messages in thread
From: Rafael J. Wysocki @ 2014-09-24 23:45 UTC (permalink / raw)
  To: Viresh Kumar; +Cc: yuantian.tang, linaro-kernel, linux-pm, hongbo.zhang, leoli

On Tuesday, September 02, 2014 09:11:24 AM Viresh Kumar wrote:
> 'cpu_data' is updated for policy->cpu first and then for all CPUs in
> policy->cpus. policy->cpus is guaranteed to contain policy->cpu as well and so
> the first write to 'cpu_data' for policy->cpu is redundant. Remove it.
> 
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>

I'm not sure about the status here?

> ---
> Hi Yuantian,
> 
> I was looking into this driver due to issues reported by Hongtao (cc'd) and
> found that we can live without some code. These aren't fixing any bugs and are
> just cleanups.
> 
> I didn't have a compiler for this and so this isn't even compiled. It would be
> great if you could please review/test these patches.
> 
>  drivers/cpufreq/ppc-corenet-cpufreq.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/drivers/cpufreq/ppc-corenet-cpufreq.c b/drivers/cpufreq/ppc-corenet-cpufreq.c
> index 3607070..bee5df7 100644
> --- a/drivers/cpufreq/ppc-corenet-cpufreq.c
> +++ b/drivers/cpufreq/ppc-corenet-cpufreq.c
> @@ -199,7 +199,6 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy *policy)
>  	}
>  
>  	data->table = table;
> -	per_cpu(cpu_data, cpu) = data;
>  
>  	/* update ->cpus if we have cluster, no harm if not */
>  	cpumask_copy(policy->cpus, per_cpu(cpu_mask, cpu));
> 

-- 
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.


* Re: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
  2014-09-24 23:45 ` Rafael J. Wysocki
@ 2014-09-29  9:05   ` Viresh Kumar
  2014-09-29 23:41     ` Rafael J. Wysocki
  0 siblings, 1 reply; 25+ messages in thread
From: Viresh Kumar @ 2014-09-29  9:05 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Yuantian Tang, Lists linaro-kernel, linux-pm, Hongbo Zhang, Li Yang

On 24 September 2014 16:45, Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> On Tuesday, September 02, 2014 09:11:24 AM Viresh Kumar wrote:
>> 'cpu_data' is updated for policy->cpu first and then for all CPUs in
>> policy->cpus. policy->cpus is guaranteed to contain policy->cpu as well and so
>> the first write to 'cpu_data' for policy->cpu is redundant. Remove it.
>>
>> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
>
> I'm not sure about the status here?

Apply only the first patch (1/2) with

Acked-by: Tang Yuantian <Yuantian.Tang@freescale.com>


* Re: [PATCH 1/2] cpufreq: ppc-corenet: remove duplicate update of cpu_data
  2014-09-29  9:05   ` Viresh Kumar
@ 2014-09-29 23:41     ` Rafael J. Wysocki
  0 siblings, 0 replies; 25+ messages in thread
From: Rafael J. Wysocki @ 2014-09-29 23:41 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Yuantian Tang, Lists linaro-kernel, linux-pm, Hongbo Zhang, Li Yang

On Monday, September 29, 2014 02:05:15 AM Viresh Kumar wrote:
> On 24 September 2014 16:45, Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> > On Tuesday, September 02, 2014 09:11:24 AM Viresh Kumar wrote:
> >> 'cpu_data' is updated for policy->cpu first and then for all CPUs in
> >> policy->cpus. policy->cpus is guaranteed to contain policy->cpu as well and so
> >> the first write to 'cpu_data' for policy->cpu is redundant. Remove it.
> >>
> >> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> >
> > I'm not sure about the status here?
> 
> Apply only the first patch (1/2) with
> 
> Acked-by: Tang Yuantian <Yuantian.Tang@freescale.com>

Please resend with the ACK.

-- 
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.

