linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH RFC/RFT 0/6] MCPM refactoring and major backend simplification
@ 2015-03-18 18:04 Nicolas Pitre
  2015-03-18 18:04 ` [PATCH RFC/RFT 1/6] MCPM: move the algorithmic complexity to the core code Nicolas Pitre
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Nicolas Pitre @ 2015-03-18 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

After 3 years in the field and new platforms using MCPM, it is time to
look at what can be improved.  It has become obvious that most backends
reimplement the same overall code structure, which ought to be better
abstracted into the core code.

Things can be simplified greatly by moving all the locking handling to the
core code, leaving machine-specific backends with only small, straightforward
operations performed under lock protection. It will no longer be necessary
for backend authors to know about all the MCPM subtleties for their code to
be "right". The low-level MCPM locking primitives can also become private
to the MCPM core, which is a plus for maintenance.
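
For illustration, the core's new power-up path boils down to the
following (condensed from patch 1): the core disables IRQs, takes the
low-level MCPM lock and maintains the per-CPU use count, and the
backend only supplies the two small powerup hooks called under that
lock.

	local_irq_disable();
	arch_spin_lock(&mcpm_lock);

	cpu_is_down = !mcpm_cpu_use_count[cluster][cpu];
	cluster_is_down = mcpm_cluster_unused(cluster);
	mcpm_cpu_use_count[cluster][cpu]++;

	if (cluster_is_down)
		ret = platform_ops->cluster_powerup(cluster);
	if (cpu_is_down && !ret)
		ret = platform_ops->cpu_powerup(cpu, cluster);

	arch_spin_unlock(&mcpm_lock);
	local_irq_enable();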

The diffstat also shows a clear benefit.  Despite the addition of new
documentation, the number of removed lines is significant, especially for
backend code.

To avoid a flag day, the new scheme is introduced in parallel with the
existing backend interface. I would like this series to hit mainline
during the next merge window; new backends also queued for that merge
window could then be merged in parallel even if they are not converted
yet. During the following cycle those backends could be converted over
and the backward compatibility interface removed.
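
Concretely, each core entry point first dispatches to the legacy
callback when a backend still provides one, for example (excerpt from
patch 1):

	/* backward compatibility callback */
	if (platform_ops->power_up)
		return platform_ops->power_up(cpu, cluster);

Backends converted to the new interface simply leave those legacy
callbacks unset and the core takes over.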

The hisi/hip04 backend, though, is instead converted to raw SMP operations,
as it currently doesn't benefit from MCPM at all and doesn't fit well
with the new backend structure.

This series can also be obtained from the following Git repository:

	http://git.linaro.org/people/nicolas.pitre/linux.git mcpm

Everything was compile tested, but runtime testing only happened
on TC2.  Reviews and test results are appreciated.

 arch/arm/common/mcpm_entry.c       | 202 +++++++++++++++++----
 arch/arm/include/asm/mcpm.h        |  65 ++++++-
 arch/arm/mach-exynos/mcpm-exynos.c | 246 ++++++--------------------
 arch/arm/mach-hisi/platmcpm.c      | 127 +++++---------
 arch/arm/mach-vexpress/dcscb.c     | 197 +++++++--------------
 arch/arm/mach-vexpress/tc2_pm.c    | 291 +++++++++----------------------
 6 files changed, 474 insertions(+), 654 deletions(-)


Nicolas

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH RFC/RFT 1/6] MCPM: move the algorithmic complexity to the core code
  2015-03-18 18:04 [PATCH RFC/RFT 0/6] MCPM refactoring and major backend simplification Nicolas Pitre
@ 2015-03-18 18:04 ` Nicolas Pitre
  2015-03-18 18:04 ` [PATCH RFC/RFT 2/6] ARM: vexpress: migrate TC2 to the new MCPM backend abstraction Nicolas Pitre
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Nicolas Pitre @ 2015-03-18 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

All backends reimplement a variation of the same CPU reference count
handling, and they are all responsible for driving the MCPM special
low-level locking. This is needless duplication, involving algorithmic
requirements that are not necessarily obvious to the uninitiated.
Past code review experience shows that those parts were all initially
implemented badly.

After 3 years, it is time to move as much of that common code as possible
into the core MCPM facility and make the backends as simple as possible.
To avoid a flag day, the new scheme is introduced in parallel with the
existing backend interface.  Once all backends are converted over, the
compatibility interface can be removed.

The new MCPM backend interface consists of simpler methods addressing
very platform-specific tasks performed under lock protection, while the
algorithmic complexity and race avoidance are kept local to the core
code.
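
As an illustration only (not part of this patch), a backend for a
hypothetical "foo" platform would now boil down to something like the
following. The foo_* register helpers are made up; the cache disable
sequences are the common pattern used by the backends converted later
in this series.

static int foo_cpu_powerup(unsigned int cpu, unsigned int cluster)
{
	/* hypothetical: set the resume address and release this CPU's reset */
	foo_set_resume_addr(cluster, cpu, virt_to_phys(mcpm_entry_point));
	foo_release_cpu_reset(cluster, cpu);
	return 0;
}

static int foo_cluster_powerup(unsigned int cluster)
{
	/* hypothetical: power up the cluster logic (L2, coherency domain) */
	foo_power_on_cluster(cluster);
	return 0;
}

static void foo_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
{
	/* hypothetical: arrange for power to be cut at the next WFI */
	foo_gate_cpu_at_wfi(cluster, cpu);
}

static void foo_cluster_powerdown_prepare(unsigned int cluster)
{
	/* hypothetical: arrange for the whole cluster to be powered off */
	foo_gate_cluster_at_wfi(cluster);
}

static void foo_cpu_cache_disable(void)
{
	v7_exit_coherency_flush(louis);
}

static void foo_cluster_cache_disable(void)
{
	v7_exit_coherency_flush(all);
	cci_disable_port_by_cpu(read_cpuid_mpidr());
}

static const struct mcpm_platform_ops foo_power_ops = {
	.cpu_powerup		= foo_cpu_powerup,
	.cluster_powerup	= foo_cluster_powerup,
	.cpu_powerdown_prepare	= foo_cpu_powerdown_prepare,
	.cluster_powerdown_prepare = foo_cluster_powerdown_prepare,
	.cpu_cache_disable	= foo_cpu_cache_disable,
	.cluster_cache_disable	= foo_cluster_cache_disable,
};

All the locking, use counting and last-man election that used to
surround such code is now handled by mcpm_cpu_power_up(),
mcpm_cpu_power_down() and friends in the core.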

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 arch/arm/common/mcpm_entry.c | 202 ++++++++++++++++++++++++++++++++++++-------
 arch/arm/include/asm/mcpm.h  |  65 +++++++++++++-
 2 files changed, 233 insertions(+), 34 deletions(-)

diff --git a/arch/arm/common/mcpm_entry.c b/arch/arm/common/mcpm_entry.c
index 3c165fc2dc..5f8a52ac7e 100644
--- a/arch/arm/common/mcpm_entry.c
+++ b/arch/arm/common/mcpm_entry.c
@@ -55,22 +55,81 @@ bool mcpm_is_available(void)
 	return (platform_ops) ? true : false;
 }
 
+/*
+ * We can't use regular spinlocks. In the switcher case, it is possible
+ * for an outbound CPU to call power_down() after its inbound counterpart
+ * is already live using the same logical CPU number which trips lockdep
+ * debugging.
+ */
+static arch_spinlock_t mcpm_lock = __ARCH_SPIN_LOCK_UNLOCKED;
+
+static int mcpm_cpu_use_count[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER];
+
+static inline bool mcpm_cluster_unused(unsigned int cluster)
+{
+	int i, cnt;
+	for (i = 0, cnt = 0; i < MAX_CPUS_PER_CLUSTER; i++)
+		cnt |= mcpm_cpu_use_count[cluster][i];
+	return !cnt;
+}
+
 int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster)
 {
+	bool cpu_is_down, cluster_is_down;
+	int ret = 0;
+
 	if (!platform_ops)
 		return -EUNATCH; /* try not to shadow power_up errors */
 	might_sleep();
-	return platform_ops->power_up(cpu, cluster);
+
+	/* backward compatibility callback */
+	if (platform_ops->power_up)
+		return platform_ops->power_up(cpu, cluster);
+
+	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+
+	/*
+	 * Since this is called with IRQs enabled, and no arch_spin_lock_irq
+	 * variant exists, we need to disable IRQs manually here.
+	 */
+	local_irq_disable();
+	arch_spin_lock(&mcpm_lock);
+
+	cpu_is_down = !mcpm_cpu_use_count[cluster][cpu];
+	cluster_is_down = mcpm_cluster_unused(cluster);
+
+	mcpm_cpu_use_count[cluster][cpu]++;
+	/*
+	 * The only possible values are:
+	 * 0 = CPU down
+	 * 1 = CPU (still) up
+	 * 2 = CPU requested to be up before it had a chance
+	 *     to actually make itself down.
+	 * Any other value is a bug.
+	 */
+	BUG_ON(mcpm_cpu_use_count[cluster][cpu] != 1 &&
+	       mcpm_cpu_use_count[cluster][cpu] != 2);
+
+	if (cluster_is_down)
+		ret = platform_ops->cluster_powerup(cluster);
+	if (cpu_is_down && !ret)
+		ret = platform_ops->cpu_powerup(cpu, cluster);
+
+	arch_spin_unlock(&mcpm_lock);
+	local_irq_enable();
+	return ret;
 }
 
 typedef void (*phys_reset_t)(unsigned long);
 
 void mcpm_cpu_power_down(void)
 {
+	unsigned int mpidr, cpu, cluster;
+	bool cpu_going_down, last_man;
 	phys_reset_t phys_reset;
 
-	if (WARN_ON_ONCE(!platform_ops || !platform_ops->power_down))
-		return;
+	if (WARN_ON_ONCE(!platform_ops))
+		return;
 	BUG_ON(!irqs_disabled());
 
 	/*
@@ -79,28 +138,65 @@ void mcpm_cpu_power_down(void)
 	 */
 	setup_mm_for_reboot();
 
-	platform_ops->power_down();
+	/* backward compatibility callback */
+	if (platform_ops->power_down) {
+		platform_ops->power_down();
+		goto not_dead;
+	}
+
+	mpidr = read_cpuid_mpidr();
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+
+	__mcpm_cpu_going_down(cpu, cluster);
 
+	arch_spin_lock(&mcpm_lock);
+	BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
+
+	mcpm_cpu_use_count[cluster][cpu]--;
+	BUG_ON(mcpm_cpu_use_count[cluster][cpu] != 0 &&
+	       mcpm_cpu_use_count[cluster][cpu] != 1);
+	cpu_going_down = !mcpm_cpu_use_count[cluster][cpu];
+	last_man = mcpm_cluster_unused(cluster);
+
+	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
+		platform_ops->cpu_powerdown_prepare(cpu, cluster);
+		platform_ops->cluster_powerdown_prepare(cluster);
+		arch_spin_unlock(&mcpm_lock);
+		platform_ops->cluster_cache_disable();
+		__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
+	} else {
+		if (cpu_going_down)
+			platform_ops->cpu_powerdown_prepare(cpu, cluster);
+		arch_spin_unlock(&mcpm_lock);
+		/*
+		 * If cpu_going_down is false here, that means a power_up
+		 * request raced ahead of us.  Even if we do not want to
+		 * shut this CPU down, the caller still expects execution
+		 * to return through the system resume entry path, like
+		 * when the WFI is aborted due to a new IRQ or the like..
+		 * So let's continue with cache cleaning in all cases.
+		 */
+		platform_ops->cpu_cache_disable();
+	}
+
+	__mcpm_cpu_down(cpu, cluster);
+
+	/* Now we are prepared for power-down, do it: */
+	if (cpu_going_down)
+		wfi();
+
+not_dead:
 	/*
 	 * It is possible for a power_up request to happen concurrently
 	 * with a power_down request for the same CPU. In this case the
-	 * power_down method might not be able to actually enter a
-	 * powered down state with the WFI instruction if the power_up
-	 * method has removed the required reset condition.  The
-	 * power_down method is then allowed to return. We must perform
-	 * a re-entry in the kernel as if the power_up method just had
-	 * deasserted reset on the CPU.
-	 *
-	 * To simplify race issues, the platform specific implementation
-	 * must accommodate for the possibility of unordered calls to
-	 * power_down and power_up with a usage count. Therefore, if a
-	 * call to power_up is issued for a CPU that is not down, then
-	 * the next call to power_down must not attempt a full shutdown
-	 * but only do the minimum (normally disabling L1 cache and CPU
-	 * coherency) and return just as if a concurrent power_up request
-	 * had happened as described above.
+	 * CPU might not be able to actually enter a powered down state
+	 * with the WFI instruction if the power_up request has removed
+	 * the required reset condition.  We must perform a re-entry in
+	 * the kernel as if the power_up method just had deasserted reset
+	 * on the CPU.
 	 */
-
 	phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
 	phys_reset(virt_to_phys(mcpm_entry_point));
 
@@ -125,26 +221,66 @@ int mcpm_wait_for_cpu_powerdown(unsigned int cpu, unsigned int cluster)
 
 void mcpm_cpu_suspend(u64 expected_residency)
 {
-	phys_reset_t phys_reset;
-
-	if (WARN_ON_ONCE(!platform_ops || !platform_ops->suspend))
+	if (WARN_ON_ONCE(!platform_ops))
 		return;
-	BUG_ON(!irqs_disabled());
 
-	/* Very similar to mcpm_cpu_power_down() */
-	setup_mm_for_reboot();
-	platform_ops->suspend(expected_residency);
-	phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
-	phys_reset(virt_to_phys(mcpm_entry_point));
-	BUG();
+	/* backward compatibility callback */
+	if (platform_ops->suspend) {
+		phys_reset_t phys_reset;
+		BUG_ON(!irqs_disabled());
+		setup_mm_for_reboot();
+		platform_ops->suspend(expected_residency);
+		phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
+		phys_reset(virt_to_phys(mcpm_entry_point));
+		BUG();
+	}
+
+	/* Some platforms might have to enable special resume modes, etc. */
+	if (platform_ops->cpu_suspend_prepare) {
+		unsigned int mpidr = read_cpuid_mpidr();
+		unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+		unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1); 
+		arch_spin_lock(&mcpm_lock);
+		platform_ops->cpu_suspend_prepare(cpu, cluster);
+		arch_spin_unlock(&mcpm_lock);
+	}
+	mcpm_cpu_power_down();
 }
 
 int mcpm_cpu_powered_up(void)
 {
+	unsigned int mpidr, cpu, cluster;
+	bool cpu_was_down, first_man;
+	unsigned long flags;
+
 	if (!platform_ops)
 		return -EUNATCH;
-	if (platform_ops->powered_up)
+
+	/* backward compatibility callback */
+	if (platform_ops->powered_up) {
 		platform_ops->powered_up();
+		return 0;
+	}
+
+	mpidr = read_cpuid_mpidr();
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	local_irq_save(flags);
+	arch_spin_lock(&mcpm_lock);
+
+	cpu_was_down = !mcpm_cpu_use_count[cluster][cpu];
+	first_man = mcpm_cluster_unused(cluster);
+
+	if (first_man && platform_ops->cluster_is_up)
+		platform_ops->cluster_is_up(cluster);
+	if (cpu_was_down)
+		mcpm_cpu_use_count[cluster][cpu] = 1;
+	if (platform_ops->cpu_is_up)
+		platform_ops->cpu_is_up(cpu, cluster);
+
+	arch_spin_unlock(&mcpm_lock);
+	local_irq_restore(flags);
+
 	return 0;
 }
 
@@ -334,8 +470,10 @@ int __init mcpm_sync_init(
 	}
 	mpidr = read_cpuid_mpidr();
 	this_cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-	for_each_online_cpu(i)
+	for_each_online_cpu(i) {
+		mcpm_cpu_use_count[this_cluster][i] = 1;
 		mcpm_sync.clusters[this_cluster].cpus[i].cpu = CPU_UP;
+	}
 	mcpm_sync.clusters[this_cluster].cluster = CLUSTER_UP;
 	sync_cache_w(&mcpm_sync);
 
diff --git a/arch/arm/include/asm/mcpm.h b/arch/arm/include/asm/mcpm.h
index 3446f6a1d9..50b378f59e 100644
--- a/arch/arm/include/asm/mcpm.h
+++ b/arch/arm/include/asm/mcpm.h
@@ -171,12 +171,73 @@ void mcpm_cpu_suspend(u64 expected_residency);
 int mcpm_cpu_powered_up(void);
 
 /*
- * Platform specific methods used in the implementation of the above API.
+ * Platform specific callbacks used in the implementation of the above API.
+ *
+ * cpu_powerup:
+ * Make given CPU runnable. Called with MCPM lock held and IRQs disabled.
+ * The given cluster is assumed to be set up (cluster_powerup would have
+ * been called beforehand). Must return 0 for success or negative error code.
+ *
+ * cluster_powerup:
+ * Set up power for given cluster. Called with MCPM lock held and IRQs
+ * disabled. Called before first cpu_powerup when cluster is down. Must
+ * return 0 for success or negative error code.
+ *
+ * cpu_suspend_prepare:
+ * Special suspend configuration. Called on target CPU with MCPM lock held
+ * and IRQs disabled. This callback is optional. If provided, it is called
+ * before cpu_powerdown_prepare.
+ *
+ * cpu_powerdown_prepare:
+ * Configure given CPU for power down. Called on target CPU with MCPM lock
+ * held and IRQs disabled. Power down must be effective only at the next WFI instruction.
+ *
+ * cluster_powerdown_prepare:
+ * Configure given cluster for power down. Called on one CPU from target
+ * cluster with MCPM lock held and IRQs disabled. A cpu_powerdown_prepare
+ * for each CPU in the cluster has happened when this occurs.
+ *
+ * cpu_cache_disable:
+ * Clean and disable CPU level cache for the calling CPU. Called with
+ * IRQs disabled. The CPU is no longer cache coherent with the rest of the
+ * system when this returns.
+ *
+ * cluster_cache_disable:
+ * Clean and disable the cluster wide cache as well as the CPU level cache
+ * for the calling CPU. No call to cpu_cache_disable will happen for this
+ * CPU. Called with IRQs disabled and only when all the other CPUs are done
+ * with their own cpu_cache_disable. The cluster is no longer cache coherent
+ * with the rest of the system when this returns.
+ *
+ * cpu_is_up:
+ * Called on given CPU after it has been powered up or resumed. The MCPM lock
+ * is held and IRQs disabled. This callback is optional.
+ *
+ * cluster_is_up:
+ * Called by the first CPU to be powered up or resumed in given cluster.
+ * The MCPM lock is held and IRQs disabled. This callback is optional. If
+ * provided, it is called before cpu_is_up for that CPU.
+ *
+ * wait_for_powerdown:
+ * Wait until given CPU is powered down. This is called in sleeping context.
+ * Some reasonable timeout must be considered. Must return 0 for success or
+ * negative error code.
  */
 struct mcpm_platform_ops {
+	int (*cpu_powerup)(unsigned int cpu, unsigned int cluster);
+	int (*cluster_powerup)(unsigned int cluster);
+	void (*cpu_suspend_prepare)(unsigned int cpu, unsigned int cluster);
+	void (*cpu_powerdown_prepare)(unsigned int cpu, unsigned int cluster);
+	void (*cluster_powerdown_prepare)(unsigned int cluster);
+	void (*cpu_cache_disable)(void);
+	void (*cluster_cache_disable)(void);
+	void (*cpu_is_up)(unsigned int cpu, unsigned int cluster);
+	void (*cluster_is_up)(unsigned int cluster);
+	int (*wait_for_powerdown)(unsigned int cpu, unsigned int cluster);
+
+	/* deprecated callbacks */
 	int (*power_up)(unsigned int cpu, unsigned int cluster);
 	void (*power_down)(void);
-	int (*wait_for_powerdown)(unsigned int cpu, unsigned int cluster);
 	void (*suspend)(u64);
 	void (*powered_up)(void);
 };
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH RFC/RFT 2/6] ARM: vexpress: migrate TC2 to the new MCPM backend abstraction
  2015-03-18 18:04 [PATCH RFC/RFT 0/6] MCPM refactoring and major backend simplification Nicolas Pitre
  2015-03-18 18:04 ` [PATCH RFC/RFT 1/6] MCPM: move the algorithmic complexity to the core code Nicolas Pitre
@ 2015-03-18 18:04 ` Nicolas Pitre
  2015-03-18 18:04 ` [PATCH RFC/RFT 3/6] ARM: vexpress: DCSCB: tighten CPU validity assertion Nicolas Pitre
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Nicolas Pitre @ 2015-03-18 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 arch/arm/mach-vexpress/tc2_pm.c | 291 +++++++++++-----------------------------
 1 file changed, 81 insertions(+), 210 deletions(-)

diff --git a/arch/arm/mach-vexpress/tc2_pm.c b/arch/arm/mach-vexpress/tc2_pm.c
index 2fb78b4648..b3328cd46c 100644
--- a/arch/arm/mach-vexpress/tc2_pm.c
+++ b/arch/arm/mach-vexpress/tc2_pm.c
@@ -18,7 +18,6 @@
 #include <linux/kernel.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
-#include <linux/spinlock.h>
 #include <linux/errno.h>
 #include <linux/irqchip/arm-gic.h>
 
@@ -44,101 +43,36 @@
 
 static void __iomem *scc;
 
-/*
- * We can't use regular spinlocks. In the switcher case, it is possible
- * for an outbound CPU to call power_down() after its inbound counterpart
- * is already live using the same logical CPU number which trips lockdep
- * debugging.
- */
-static arch_spinlock_t tc2_pm_lock = __ARCH_SPIN_LOCK_UNLOCKED;
-
 #define TC2_CLUSTERS			2
 #define TC2_MAX_CPUS_PER_CLUSTER	3
 
 static unsigned int tc2_nr_cpus[TC2_CLUSTERS];
 
-/* Keep per-cpu usage count to cope with unordered up/down requests */
-static int tc2_pm_use_count[TC2_MAX_CPUS_PER_CLUSTER][TC2_CLUSTERS];
-
-#define tc2_cluster_unused(cluster) \
-	(!tc2_pm_use_count[0][cluster] && \
-	 !tc2_pm_use_count[1][cluster] && \
-	 !tc2_pm_use_count[2][cluster])
-
-static int tc2_pm_power_up(unsigned int cpu, unsigned int cluster)
+static int tc2_pm_cpu_powerup(unsigned int cpu, unsigned int cluster)
 {
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
 	if (cluster >= TC2_CLUSTERS || cpu >= tc2_nr_cpus[cluster])
 		return -EINVAL;
-
-	/*
-	 * Since this is called with IRQs enabled, and no arch_spin_lock_irq
-	 * variant exists, we need to disable IRQs manually here.
-	 */
-	local_irq_disable();
-	arch_spin_lock(&tc2_pm_lock);
-
-	if (tc2_cluster_unused(cluster))
-		ve_spc_powerdown(cluster, false);
-
-	tc2_pm_use_count[cpu][cluster]++;
-	if (tc2_pm_use_count[cpu][cluster] == 1) {
-		ve_spc_set_resume_addr(cluster, cpu,
-				       virt_to_phys(mcpm_entry_point));
-		ve_spc_cpu_wakeup_irq(cluster, cpu, true);
-	} else if (tc2_pm_use_count[cpu][cluster] != 2) {
-		/*
-		 * The only possible values are:
-		 * 0 = CPU down
-		 * 1 = CPU (still) up
-		 * 2 = CPU requested to be up before it had a chance
-		 *     to actually make itself down.
-		 * Any other value is a bug.
-		 */
-		BUG();
-	}
-
-	arch_spin_unlock(&tc2_pm_lock);
-	local_irq_enable();
-
+	ve_spc_set_resume_addr(cluster, cpu,
+			       virt_to_phys(mcpm_entry_point));
+	ve_spc_cpu_wakeup_irq(cluster, cpu, true);
 	return 0;
 }
 
-static void tc2_pm_down(u64 residency)
+static int tc2_pm_cluster_powerup(unsigned int cluster)
 {
-	unsigned int mpidr, cpu, cluster;
-	bool last_man = false, skip_wfi = false;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	pr_debug("%s: cluster %u\n", __func__, cluster);
+	if (cluster >= TC2_CLUSTERS)
+		return -EINVAL;
+	ve_spc_powerdown(cluster, false);
+	return 0;
+}
 
+static void tc2_pm_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
+{
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
 	BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER);
-
-	__mcpm_cpu_going_down(cpu, cluster);
-
-	arch_spin_lock(&tc2_pm_lock);
-	BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
-	tc2_pm_use_count[cpu][cluster]--;
-	if (tc2_pm_use_count[cpu][cluster] == 0) {
-		ve_spc_cpu_wakeup_irq(cluster, cpu, true);
-		if (tc2_cluster_unused(cluster)) {
-			ve_spc_powerdown(cluster, true);
-			ve_spc_global_wakeup_irq(true);
-			last_man = true;
-		}
-	} else if (tc2_pm_use_count[cpu][cluster] == 1) {
-		/*
-		 * A power_up request went ahead of us.
-		 * Even if we do not want to shut this CPU down,
-		 * the caller expects a certain state as if the WFI
-		 * was aborted.  So let's continue with cache cleaning.
-		 */
-		skip_wfi = true;
-	} else
-		BUG();
-
+	ve_spc_cpu_wakeup_irq(cluster, cpu, true);
 	/*
 	 * If the CPU is committed to power down, make sure
 	 * the power controller will be in charge of waking it
@@ -146,55 +80,38 @@ static void tc2_pm_down(u64 residency)
 	 * to the CPU by disabling the GIC CPU IF to prevent wfi
 	 * from completing execution behind power controller back
 	 */
-	if (!skip_wfi)
-		gic_cpu_if_down();
-
-	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
-		arch_spin_unlock(&tc2_pm_lock);
-
-		if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
-			/*
-			 * On the Cortex-A15 we need to disable
-			 * L2 prefetching before flushing the cache.
-			 */
-			asm volatile(
-			"mcr	p15, 1, %0, c15, c0, 3 \n\t"
-			"isb	\n\t"
-			"dsb	"
-			: : "r" (0x400) );
-		}
-
-		v7_exit_coherency_flush(all);
-
-		cci_disable_port_by_cpu(mpidr);
-
-		__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
-	} else {
-		/*
-		 * If last man then undo any setup done previously.
-		 */
-		if (last_man) {
-			ve_spc_powerdown(cluster, false);
-			ve_spc_global_wakeup_irq(false);
-		}
-
-		arch_spin_unlock(&tc2_pm_lock);
-
-		v7_exit_coherency_flush(louis);
-	}
-
-	__mcpm_cpu_down(cpu, cluster);
+	gic_cpu_if_down();
+}
 
-	/* Now we are prepared for power-down, do it: */
-	if (!skip_wfi)
-		wfi();
+static void tc2_pm_cluster_powerdown_prepare(unsigned int cluster)
+{
+	pr_debug("%s: cluster %u\n", __func__, cluster);
+	BUG_ON(cluster >= TC2_CLUSTERS);
+	ve_spc_powerdown(cluster, true);
+	ve_spc_global_wakeup_irq(true);
+}
 
-	/* Not dead at this point?  Let our caller cope. */
+static void tc2_pm_cpu_cache_disable(void)
+{
+	v7_exit_coherency_flush(louis);
 }
 
-static void tc2_pm_power_down(void)
+static void tc2_pm_cluster_cache_disable(void)
 {
-	tc2_pm_down(0);
+	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
+		/*
+		 * On the Cortex-A15 we need to disable
+		 * L2 prefetching before flushing the cache.
+		 */
+		asm volatile(
+		"mcr	p15, 1, %0, c15, c0, 3 \n\t"
+		"isb	\n\t"
+		"dsb	"
+		: : "r" (0x400) );
+	}
+
+	v7_exit_coherency_flush(all);
+	cci_disable_port_by_cpu(read_cpuid_mpidr());
 }
 
 static int tc2_core_in_reset(unsigned int cpu, unsigned int cluster)
@@ -217,27 +134,21 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
 	BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER);
 
 	for (tries = 0; tries < TIMEOUT_MSEC / POLL_MSEC; ++tries) {
+		pr_debug("%s(cpu=%u, cluster=%u): RESET_CTRL = 0x%08X\n",
+			 __func__, cpu, cluster,
+			 readl_relaxed(scc + RESET_CTRL));
+
 		/*
-		 * Only examine the hardware state if the target CPU has
-		 * caught up at least as far as tc2_pm_down():
+		 * We need the CPU to reach WFI, but the power
+		 * controller may put the cluster in reset and
+		 * power it off as soon as that happens, before
+		 * we have a chance to see STANDBYWFI.
+		 *
+		 * So we need to check for both conditions:
 		 */
-		if (ACCESS_ONCE(tc2_pm_use_count[cpu][cluster]) == 0) {
-			pr_debug("%s(cpu=%u, cluster=%u): RESET_CTRL = 0x%08X\n",
-				 __func__, cpu, cluster,
-				 readl_relaxed(scc + RESET_CTRL));
-
-			/*
-			 * We need the CPU to reach WFI, but the power
-			 * controller may put the cluster in reset and
-			 * power it off as soon as that happens, before
-			 * we have a chance to see STANDBYWFI.
-			 *
-			 * So we need to check for both conditions:
-			 */
-			if (tc2_core_in_reset(cpu, cluster) ||
-			    ve_spc_cpu_in_wfi(cpu, cluster))
-				return 0; /* success: the CPU is halted */
-		}
+		if (tc2_core_in_reset(cpu, cluster) ||
+		    ve_spc_cpu_in_wfi(cpu, cluster))
+			return 0; /* success: the CPU is halted */
 
 		/* Otherwise, wait and retry: */
 		msleep(POLL_MSEC);
@@ -246,72 +157,40 @@ static int tc2_pm_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
 	return -ETIMEDOUT; /* timeout */
 }
 
-static void tc2_pm_suspend(u64 residency)
+static void tc2_pm_cpu_suspend_prepare(unsigned int cpu, unsigned int cluster)
 {
-	unsigned int mpidr, cpu, cluster;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
 	ve_spc_set_resume_addr(cluster, cpu, virt_to_phys(mcpm_entry_point));
-	tc2_pm_down(residency);
 }
 
-static void tc2_pm_powered_up(void)
+static void tc2_pm_cpu_is_up(unsigned int cpu, unsigned int cluster)
 {
-	unsigned int mpidr, cpu, cluster;
-	unsigned long flags;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
 	BUG_ON(cluster >= TC2_CLUSTERS || cpu >= TC2_MAX_CPUS_PER_CLUSTER);
-
-	local_irq_save(flags);
-	arch_spin_lock(&tc2_pm_lock);
-
-	if (tc2_cluster_unused(cluster)) {
-		ve_spc_powerdown(cluster, false);
-		ve_spc_global_wakeup_irq(false);
-	}
-
-	if (!tc2_pm_use_count[cpu][cluster])
-		tc2_pm_use_count[cpu][cluster] = 1;
-
 	ve_spc_cpu_wakeup_irq(cluster, cpu, false);
 	ve_spc_set_resume_addr(cluster, cpu, 0);
+}
 
-	arch_spin_unlock(&tc2_pm_lock);
-	local_irq_restore(flags);
+static void tc2_pm_cluster_is_up(unsigned int cluster)
+{
+	pr_debug("%s: cluster %u\n", __func__, cluster);
+	BUG_ON(cluster >= TC2_CLUSTERS);
+	ve_spc_powerdown(cluster, false);
+	ve_spc_global_wakeup_irq(false);
 }
 
 static const struct mcpm_platform_ops tc2_pm_power_ops = {
-	.power_up		= tc2_pm_power_up,
-	.power_down		= tc2_pm_power_down,
+	.cpu_powerup		= tc2_pm_cpu_powerup,
+	.cluster_powerup	= tc2_pm_cluster_powerup,
+	.cpu_suspend_prepare	= tc2_pm_cpu_suspend_prepare,
+	.cpu_powerdown_prepare	= tc2_pm_cpu_powerdown_prepare,
+	.cluster_powerdown_prepare = tc2_pm_cluster_powerdown_prepare,
+	.cpu_cache_disable	= tc2_pm_cpu_cache_disable,
+	.cluster_cache_disable	= tc2_pm_cluster_cache_disable,
 	.wait_for_powerdown	= tc2_pm_wait_for_powerdown,
-	.suspend		= tc2_pm_suspend,
-	.powered_up		= tc2_pm_powered_up,
+	.cpu_is_up		= tc2_pm_cpu_is_up,
+	.cluster_is_up		= tc2_pm_cluster_is_up,
 };
 
-static bool __init tc2_pm_usage_count_init(void)
-{
-	unsigned int mpidr, cpu, cluster;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-
-	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
-	if (cluster >= TC2_CLUSTERS || cpu >= tc2_nr_cpus[cluster]) {
-		pr_err("%s: boot CPU is out of bound!\n", __func__);
-		return false;
-	}
-	tc2_pm_use_count[cpu][cluster] = 1;
-	return true;
-}
-
 /*
  * Enable cluster-level coherency, in preparation for turning on the MMU.
  */
@@ -323,23 +202,9 @@ static void __naked tc2_pm_power_up_setup(unsigned int affinity_level)
 "	b	cci_enable_port_for_self ");
 }
 
-static void __init tc2_cache_off(void)
-{
-	pr_info("TC2: disabling cache during MCPM loopback test\n");
-	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
-		/* disable L2 prefetching on the Cortex-A15 */
-		asm volatile(
-		"mcr	p15, 1, %0, c15, c0, 3 \n\t"
-		"isb	\n\t"
-		"dsb	"
-		: : "r" (0x400) );
-	}
-	v7_exit_coherency_flush(all);
-	cci_disable_port_by_cpu(read_cpuid_mpidr());
-}
-
 static int __init tc2_pm_init(void)
 {
+	unsigned int mpidr, cpu, cluster;
 	int ret, irq;
 	u32 a15_cluster_id, a7_cluster_id, sys_info;
 	struct device_node *np;
@@ -379,14 +244,20 @@ static int __init tc2_pm_init(void)
 	if (!cci_probed())
 		return -ENODEV;
 
-	if (!tc2_pm_usage_count_init())
+	mpidr = read_cpuid_mpidr();
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+	if (cluster >= TC2_CLUSTERS || cpu >= tc2_nr_cpus[cluster]) {
+		pr_err("%s: boot CPU is out of bound!\n", __func__);
 		return -EINVAL;
+	}
 
 	ret = mcpm_platform_register(&tc2_pm_power_ops);
 	if (!ret) {
 		mcpm_sync_init(tc2_pm_power_up_setup);
 		/* test if we can (re)enable the CCI on our own */
-		BUG_ON(mcpm_loopback(tc2_cache_off) != 0);
+		BUG_ON(mcpm_loopback(tc2_pm_cluster_cache_disable) != 0);
 		pr_info("TC2 power management initialized\n");
 	}
 	return ret;
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH RFC/RFT 3/6] ARM: vexpress: DCSCB: tighten CPU validity assertion
  2015-03-18 18:04 [PATCH RFC/RFT 0/6] MCPM refactoring and major backend simplification Nicolas Pitre
  2015-03-18 18:04 ` [PATCH RFC/RFT 1/6] MCPM: move the algorithmic complexity to the core code Nicolas Pitre
  2015-03-18 18:04 ` [PATCH RFC/RFT 2/6] ARM: vexpress: migrate TC2 to the new MCPM backend abstraction Nicolas Pitre
@ 2015-03-18 18:04 ` Nicolas Pitre
  2015-03-18 18:04 ` [PATCH RFC/RFT 4/6] ARM: vexpress: migrate DCSCB to the new MCPM backend abstraction Nicolas Pitre
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Nicolas Pitre @ 2015-03-18 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

Currently the cpu argument validity check uses a hardcoded limit of 4.
The DCSCB configuration data provides the actual number of CPUs per
cluster, and we already use it elsewhere.  Let's improve the cpu argument
validity check by using that information instead.
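
For example, a 3-CPU cluster yields dcscb_allcpus_mask[cluster] = 0x7,
so the mask-based check below accepts cpu 0..2 and rejects cpu 3, which
the old "cpu >= 4" test would have let through:

	unsigned int cpumask = 1 << cpu;

	if (cluster >= 2 || !(cpumask & dcscb_allcpus_mask[cluster]))
		return -EINVAL;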

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 arch/arm/mach-vexpress/dcscb.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c
index 30b993399e..12c74734cd 100644
--- a/arch/arm/mach-vexpress/dcscb.c
+++ b/arch/arm/mach-vexpress/dcscb.c
@@ -54,7 +54,7 @@ static int dcscb_power_up(unsigned int cpu, unsigned int cluster)
 	unsigned int all_mask;
 
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
-	if (cpu >= 4 || cluster >= 2)
+	if (cluster >= 2 || !(cpumask & dcscb_allcpus_mask[cluster]))
 		return -EINVAL;
 
 	all_mask = dcscb_allcpus_mask[cluster];
@@ -105,7 +105,7 @@ static void dcscb_power_down(void)
 	cpumask = (1 << cpu);
 
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
-	BUG_ON(cpu >= 4 || cluster >= 2);
+	BUG_ON(cluster >= 2 || !(cpumask & dcscb_allcpus_mask[cluster]));
 
 	all_mask = dcscb_allcpus_mask[cluster];
 
@@ -189,7 +189,7 @@ static void __init dcscb_usage_count_init(void)
 	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
 
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
-	BUG_ON(cpu >= 4 || cluster >= 2);
+	BUG_ON(cluster >= 2 || !((1 << cpu) & dcscb_allcpus_mask[cluster]));
 	dcscb_use_count[cpu][cluster] = 1;
 }
 
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH RFC/RFT 4/6] ARM: vexpress: migrate DCSCB to the new MCPM backend abstraction
  2015-03-18 18:04 [PATCH RFC/RFT 0/6] MCPM refactoring and major backend simplification Nicolas Pitre
                   ` (2 preceding siblings ...)
  2015-03-18 18:04 ` [PATCH RFC/RFT 3/6] ARM: vexpress: DCSCB: tighten CPU validity assertion Nicolas Pitre
@ 2015-03-18 18:04 ` Nicolas Pitre
  2015-03-18 18:04 ` [PATCH RFC/RFT 5/6] ARM: Exynos: " Nicolas Pitre
  2015-03-18 18:04 ` [PATCH RFC/RFT 6/6] ARM: hisi/hip04: remove the MCPM overhead Nicolas Pitre
  5 siblings, 0 replies; 10+ messages in thread
From: Nicolas Pitre @ 2015-03-18 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 arch/arm/mach-vexpress/dcscb.c | 195 ++++++++++++++---------------------------
 1 file changed, 66 insertions(+), 129 deletions(-)

diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c
index 12c74734cd..5cedcf5721 100644
--- a/arch/arm/mach-vexpress/dcscb.c
+++ b/arch/arm/mach-vexpress/dcscb.c
@@ -12,7 +12,6 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/io.h>
-#include <linux/spinlock.h>
 #include <linux/errno.h>
 #include <linux/of_address.h>
 #include <linux/vexpress.h>
@@ -36,163 +35,102 @@
 #define KFC_CFG_W	0x2c
 #define DCS_CFG_R	0x30
 
-/*
- * We can't use regular spinlocks. In the switcher case, it is possible
- * for an outbound CPU to call power_down() while its inbound counterpart
- * is already live using the same logical CPU number which trips lockdep
- * debugging.
- */
-static arch_spinlock_t dcscb_lock = __ARCH_SPIN_LOCK_UNLOCKED;
-
 static void __iomem *dcscb_base;
-static int dcscb_use_count[4][2];
 static int dcscb_allcpus_mask[2];
 
-static int dcscb_power_up(unsigned int cpu, unsigned int cluster)
+static int dcscb_cpu_powerup(unsigned int cpu, unsigned int cluster)
 {
 	unsigned int rst_hold, cpumask = (1 << cpu);
-	unsigned int all_mask;
 
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
 	if (cluster >= 2 || !(cpumask & dcscb_allcpus_mask[cluster]))
 		return -EINVAL;
 
-	all_mask = dcscb_allcpus_mask[cluster];
+	rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4);
+	rst_hold &= ~(cpumask | (cpumask << 4));
+	writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4);
+	return 0;
+}
 
-	/*
-	 * Since this is called with IRQs enabled, and no arch_spin_lock_irq
-	 * variant exists, we need to disable IRQs manually here.
-	 */
-	local_irq_disable();
-	arch_spin_lock(&dcscb_lock);
-
-	dcscb_use_count[cpu][cluster]++;
-	if (dcscb_use_count[cpu][cluster] == 1) {
-		rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4);
-		if (rst_hold & (1 << 8)) {
-			/* remove cluster reset and add individual CPU's reset */
-			rst_hold &= ~(1 << 8);
-			rst_hold |= all_mask;
-		}
-		rst_hold &= ~(cpumask | (cpumask << 4));
-		writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4);
-	} else if (dcscb_use_count[cpu][cluster] != 2) {
-		/*
-		 * The only possible values are:
-		 * 0 = CPU down
-		 * 1 = CPU (still) up
-		 * 2 = CPU requested to be up before it had a chance
-		 *     to actually make itself down.
-		 * Any other value is a bug.
-		 */
-		BUG();
-	}
+static int dcscb_cluster_powerup(unsigned int cluster)
+{
+	unsigned int rst_hold;
 
-	arch_spin_unlock(&dcscb_lock);
-	local_irq_enable();
+	pr_debug("%s: cluster %u\n", __func__, cluster);
+	if (cluster >= 2)
+		return -EINVAL;
 
+	/* remove cluster reset and add individual CPU's reset */
+	rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4);
+	rst_hold &= ~(1 << 8);
+	rst_hold |= dcscb_allcpus_mask[cluster];
+	writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4);
 	return 0;
 }
 
-static void dcscb_power_down(void)
+static void dcscb_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
 {
-	unsigned int mpidr, cpu, cluster, rst_hold, cpumask, all_mask;
-	bool last_man = false, skip_wfi = false;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-	cpumask = (1 << cpu);
+	unsigned int rst_hold;
 
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
-	BUG_ON(cluster >= 2 || !(cpumask & dcscb_allcpus_mask[cluster]));
-
-	all_mask = dcscb_allcpus_mask[cluster];
-
-	__mcpm_cpu_going_down(cpu, cluster);
-
-	arch_spin_lock(&dcscb_lock);
-	BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
-	dcscb_use_count[cpu][cluster]--;
-	if (dcscb_use_count[cpu][cluster] == 0) {
-		rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4);
-		rst_hold |= cpumask;
-		if (((rst_hold | (rst_hold >> 4)) & all_mask) == all_mask) {
-			rst_hold |= (1 << 8);
-			last_man = true;
-		}
-		writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4);
-	} else if (dcscb_use_count[cpu][cluster] == 1) {
-		/*
-		 * A power_up request went ahead of us.
-		 * Even if we do not want to shut this CPU down,
-		 * the caller expects a certain state as if the WFI
-		 * was aborted.  So let's continue with cache cleaning.
-		 */
-		skip_wfi = true;
-	} else
-		BUG();
-
-	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
-		arch_spin_unlock(&dcscb_lock);
-
-		/* Flush all cache levels for this cluster. */
-		v7_exit_coherency_flush(all);
-
-		/*
-		 * A full outer cache flush could be needed at this point
-		 * on platforms with such a cache, depending on where the
-		 * outer cache sits. In some cases the notion of a "last
-		 * cluster standing" would need to be implemented if the
-		 * outer cache is shared across clusters. In any case, when
-		 * the outer cache needs flushing, there is no concurrent
-		 * access to the cache controller to worry about and no
-		 * special locking besides what is already provided by the
-		 * MCPM state machinery is needed.
-		 */
-
-		/*
-		 * Disable cluster-level coherency by masking
-		 * incoming snoops and DVM messages:
-		 */
-		cci_disable_port_by_cpu(mpidr);
-
-		__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
-	} else {
-		arch_spin_unlock(&dcscb_lock);
-
-		/* Disable and flush the local CPU cache. */
-		v7_exit_coherency_flush(louis);
-	}
+	BUG_ON(cluster >= 2 || !((1 << cpu) & dcscb_allcpus_mask[cluster]));
 
-	__mcpm_cpu_down(cpu, cluster);
+	rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4);
+	rst_hold |= (1 << cpu);
+	writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4);
+}
 
-	/* Now we are prepared for power-down, do it: */
-	dsb();
-	if (!skip_wfi)
-		wfi();
+static void dcscb_cluster_powerdown_prepare(unsigned int cluster)
+{
+	unsigned int rst_hold;
 
-	/* Not dead at this point?  Let our caller cope. */
+	pr_debug("%s: cluster %u\n", __func__, cluster);
+	BUG_ON(cluster >= 2);
+
+	rst_hold = readl_relaxed(dcscb_base + RST_HOLD0 + cluster * 4);
+	rst_hold |= (1 << 8);
+	writel_relaxed(rst_hold, dcscb_base + RST_HOLD0 + cluster * 4);
 }
 
-static const struct mcpm_platform_ops dcscb_power_ops = {
-	.power_up	= dcscb_power_up,
-	.power_down	= dcscb_power_down,
-};
+static void dcscb_cpu_cache_disable(void)
+{
+	/* Disable and flush the local CPU cache. */
+	v7_exit_coherency_flush(louis);
+}
 
-static void __init dcscb_usage_count_init(void)
+static void dcscb_cluster_cache_disable(void)
 {
-	unsigned int mpidr, cpu, cluster;
+	/* Flush all cache levels for this cluster. */
+	v7_exit_coherency_flush(all);
 
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+	/*
+	 * A full outer cache flush could be needed at this point
+	 * on platforms with such a cache, depending on where the
+	 * outer cache sits. In some cases the notion of a "last
+	 * cluster standing" would need to be implemented if the
+	 * outer cache is shared across clusters. In any case, when
+	 * the outer cache needs flushing, there is no concurrent
+	 * access to the cache controller to worry about and no
+	 * special locking besides what is already provided by the
+	 * MCPM state machinery is needed.
+	 */
 
-	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
-	BUG_ON(cluster >= 2 || !((1 << cpu) & dcscb_allcpus_mask[cluster]));
-	dcscb_use_count[cpu][cluster] = 1;
+	/*
+	 * Disable cluster-level coherency by masking
+	 * incoming snoops and DVM messages:
+	 */
+	cci_disable_port_by_cpu(read_cpuid_mpidr());
 }
 
+static const struct mcpm_platform_ops dcscb_power_ops = {
+	.cpu_powerup		= dcscb_cpu_powerup,
+	.cluster_powerup	= dcscb_cluster_powerup,
+	.cpu_powerdown_prepare	= dcscb_cpu_powerdown_prepare,
+	.cluster_powerdown_prepare = dcscb_cluster_powerdown_prepare,
+	.cpu_cache_disable	= dcscb_cpu_cache_disable,
+	.cluster_cache_disable	= dcscb_cluster_cache_disable,
+};
+
 extern void dcscb_power_up_setup(unsigned int affinity_level);
 
 static int __init dcscb_init(void)
@@ -213,7 +151,6 @@ static int __init dcscb_init(void)
 	cfg = readl_relaxed(dcscb_base + DCS_CFG_R);
 	dcscb_allcpus_mask[0] = (1 << (((cfg >> 16) >> (0 << 2)) & 0xf)) - 1;
 	dcscb_allcpus_mask[1] = (1 << (((cfg >> 16) >> (1 << 2)) & 0xf)) - 1;
-	dcscb_usage_count_init();
 
 	ret = mcpm_platform_register(&dcscb_power_ops);
 	if (!ret)
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH RFC/RFT 5/6] ARM: Exynos: migrate DCSCB to the new MCPM backend abstraction
  2015-03-18 18:04 [PATCH RFC/RFT 0/6] MCPM refactoring and major backend simplification Nicolas Pitre
                   ` (3 preceding siblings ...)
  2015-03-18 18:04 ` [PATCH RFC/RFT 4/6] ARM: vexpress: migrate DCSCB to the new MCPM backend abstraction Nicolas Pitre
@ 2015-03-18 18:04 ` Nicolas Pitre
  2015-03-24 23:24   ` Nicolas Pitre
  2015-03-18 18:04 ` [PATCH RFC/RFT 6/6] ARM: hisi/hip04: remove the MCPM overhead Nicolas Pitre
  5 siblings, 1 reply; 10+ messages in thread
From: Nicolas Pitre @ 2015-03-18 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

The custom suspend callback is removed by this change. That includes
the dubious call to exynos_cpu_power_up() that was present at the end
of exynos_suspend().
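
For context, with patch 1 applied the core handles suspend generically:
the optional cpu_suspend_prepare hook is called under the MCPM lock and
the regular power-down path is then reused, so no per-platform suspend
callback is needed. Here is the new-interface path from patch 1, shown
without the backward-compatibility branch:

void mcpm_cpu_suspend(u64 expected_residency)
{
	if (WARN_ON_ONCE(!platform_ops))
		return;

	/* Some platforms might have to enable special resume modes, etc. */
	if (platform_ops->cpu_suspend_prepare) {
		unsigned int mpidr = read_cpuid_mpidr();
		unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
		unsigned int cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
		arch_spin_lock(&mcpm_lock);
		platform_ops->cpu_suspend_prepare(cpu, cluster);
		arch_spin_unlock(&mcpm_lock);
	}
	mcpm_cpu_power_down();
}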

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 arch/arm/mach-exynos/mcpm-exynos.c | 246 ++++++++-----------------------------
 1 file changed, 51 insertions(+), 195 deletions(-)

diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
index b0d3c2e876..d4bbbfb5fe 100644
--- a/arch/arm/mach-exynos/mcpm-exynos.c
+++ b/arch/arm/mach-exynos/mcpm-exynos.c
@@ -61,25 +61,7 @@ static void __iomem *ns_sram_base_addr;
 	: "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", \
 	  "r9", "r10", "lr", "memory")
 
-/*
- * We can't use regular spinlocks. In the switcher case, it is possible
- * for an outbound CPU to call power_down() after its inbound counterpart
- * is already live using the same logical CPU number which trips lockdep
- * debugging.
- */
-static arch_spinlock_t exynos_mcpm_lock = __ARCH_SPIN_LOCK_UNLOCKED;
-static int
-cpu_use_count[EXYNOS5420_CPUS_PER_CLUSTER][EXYNOS5420_NR_CLUSTERS];
-
-#define exynos_cluster_usecnt(cluster) \
-	(cpu_use_count[0][cluster] +   \
-	 cpu_use_count[1][cluster] +   \
-	 cpu_use_count[2][cluster] +   \
-	 cpu_use_count[3][cluster])
-
-#define exynos_cluster_unused(cluster) !exynos_cluster_usecnt(cluster)
-
-static int exynos_power_up(unsigned int cpu, unsigned int cluster)
+static int exynos_cpu_powerup(unsigned int cpu, unsigned int cluster)
 {
 	unsigned int cpunr = cpu + (cluster * EXYNOS5420_CPUS_PER_CLUSTER);
 
@@ -88,127 +70,65 @@ static int exynos_power_up(unsigned int cpu, unsigned int cluster)
 		cluster >= EXYNOS5420_NR_CLUSTERS)
 		return -EINVAL;
 
-	/*
-	 * Since this is called with IRQs enabled, and no arch_spin_lock_irq
-	 * variant exists, we need to disable IRQs manually here.
-	 */
-	local_irq_disable();
-	arch_spin_lock(&exynos_mcpm_lock);
-
-	cpu_use_count[cpu][cluster]++;
-	if (cpu_use_count[cpu][cluster] == 1) {
-		bool was_cluster_down =
-			(exynos_cluster_usecnt(cluster) == 1);
-
-		/*
-		 * Turn on the cluster (L2/COMMON) and then power on the
-		 * cores.
-		 */
-		if (was_cluster_down)
-			exynos_cluster_power_up(cluster);
-
-		exynos_cpu_power_up(cpunr);
-	} else if (cpu_use_count[cpu][cluster] != 2) {
-		/*
-		 * The only possible values are:
-		 * 0 = CPU down
-		 * 1 = CPU (still) up
-		 * 2 = CPU requested to be up before it had a chance
-		 *     to actually make itself down.
-		 * Any other value is a bug.
-		 */
-		BUG();
-	}
+	exynos_cpu_power_up(cpunr);
+	return 0;
+}
 
-	arch_spin_unlock(&exynos_mcpm_lock);
-	local_irq_enable();
+static int exynos_cluster_powerup(unsigned int cluster)
+{
+	pr_debug("%s: cluster %u\n", __func__, cluster);
+	if (cluster >= EXYNOS5420_NR_CLUSTERS)
+		return -EINVAL;
 
+	exynos_cluster_power_up(cluster);
 	return 0;
 }
 
-/*
- * NOTE: This function requires the stack data to be visible through power down
- * and can only be executed on processors like A15 and A7 that hit the cache
- * with the C bit clear in the SCTLR register.
- */
-static void exynos_power_down(void)
+static void exynos_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
 {
-	unsigned int mpidr, cpu, cluster;
-	bool last_man = false, skip_wfi = false;
-	unsigned int cpunr;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-	cpunr =  cpu + (cluster * EXYNOS5420_CPUS_PER_CLUSTER);
+	unsigned int cpunr = cpu + (cluster * EXYNOS5420_CPUS_PER_CLUSTER);
 
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
 	BUG_ON(cpu >= EXYNOS5420_CPUS_PER_CLUSTER ||
 			cluster >= EXYNOS5420_NR_CLUSTERS);
+	exynos_cpu_power_down(cpunr);
+}
 
-	__mcpm_cpu_going_down(cpu, cluster);
-
-	arch_spin_lock(&exynos_mcpm_lock);
-	BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
-	cpu_use_count[cpu][cluster]--;
-	if (cpu_use_count[cpu][cluster] == 0) {
-		exynos_cpu_power_down(cpunr);
-
-		if (exynos_cluster_unused(cluster)) {
-			exynos_cluster_power_down(cluster);
-			last_man = true;
-		}
-	} else if (cpu_use_count[cpu][cluster] == 1) {
-		/*
-		 * A power_up request went ahead of us.
-		 * Even if we do not want to shut this CPU down,
-		 * the caller expects a certain state as if the WFI
-		 * was aborted.  So let's continue with cache cleaning.
-		 */
-		skip_wfi = true;
-	} else {
-		BUG();
-	}
-
-	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
-		arch_spin_unlock(&exynos_mcpm_lock);
-
-		if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
-			/*
-			 * On the Cortex-A15 we need to disable
-			 * L2 prefetching before flushing the cache.
-			 */
-			asm volatile(
-			"mcr	p15, 1, %0, c15, c0, 3\n\t"
-			"isb\n\t"
-			"dsb"
-			: : "r" (0x400));
-		}
+static void exynos_cluster_powerdown_prepare(unsigned int cluster)
+{
+	pr_debug("%s: cluster %u\n", __func__, cluster);
+	BUG_ON(cluster >= EXYNOS5420_NR_CLUSTERS);
+	exynos_cluster_power_down(cluster);
+}
 
-		/* Flush all cache levels for this cluster. */
-		exynos_v7_exit_coherency_flush(all);
+static void exynos_cpu_cache_disable(void)
+{
+	/* Disable and flush the local CPU cache. */
+	exynos_v7_exit_coherency_flush(louis);
+}
 
+static void exynos_cluster_cache_disable(void)
+{
+	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
 		/*
-		 * Disable cluster-level coherency by masking
-		 * incoming snoops and DVM messages:
+		 * On the Cortex-A15 we need to disable
+		 * L2 prefetching before flushing the cache.
 		 */
-		cci_disable_port_by_cpu(mpidr);
-
-		__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
-	} else {
-		arch_spin_unlock(&exynos_mcpm_lock);
-
-		/* Disable and flush the local CPU cache. */
-		exynos_v7_exit_coherency_flush(louis);
+		asm volatile(
+		"mcr	p15, 1, %0, c15, c0, 3\n\t"
+		"isb\n\t"
+		"dsb"
+		: : "r" (0x400));
 	}
 
-	__mcpm_cpu_down(cpu, cluster);
-
-	/* Now we are prepared for power-down, do it: */
-	if (!skip_wfi)
-		wfi();
+	/* Flush all cache levels for this cluster. */
+	exynos_v7_exit_coherency_flush(all);
 
-	/* Not dead at this point?  Let our caller cope. */
+	/*
+	 * Disable cluster-level coherency by masking
+	 * incoming snoops and DVM messages:
+	 */
+	cci_disable_port_by_cpu(read_cpuid_mpidr());
 }
 
 static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
@@ -222,10 +142,8 @@ static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
 
 	/* Wait for the core state to be OFF */
 	while (tries--) {
-		if (ACCESS_ONCE(cpu_use_count[cpu][cluster]) == 0) {
-			if ((exynos_cpu_power_state(cpunr) == 0))
-				return 0; /* success: the CPU is halted */
-		}
+		if ((exynos_cpu_power_state(cpunr) == 0))
+			return 0; /* success: the CPU is halted */
 
 		/* Otherwise, wait and retry: */
 		msleep(1);
@@ -234,63 +152,16 @@ static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
 	return -ETIMEDOUT; /* timeout */
 }
 
-static void exynos_powered_up(void)
-{
-	unsigned int mpidr, cpu, cluster;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-
-	arch_spin_lock(&exynos_mcpm_lock);
-	if (cpu_use_count[cpu][cluster] == 0)
-		cpu_use_count[cpu][cluster] = 1;
-	arch_spin_unlock(&exynos_mcpm_lock);
-}
-
-static void exynos_suspend(u64 residency)
-{
-	unsigned int mpidr, cpunr;
-
-	exynos_power_down();
-
-	/*
-	 * Execution reaches here only if cpu did not power down.
-	 * Hence roll back the changes done in exynos_power_down function.
-	 *
-	 * CAUTION: "This function requires the stack data to be visible through
-	 * power down and can only be executed on processors like A15 and A7
-	 * that hit the cache with the C bit clear in the SCTLR register."
-	*/
-	mpidr = read_cpuid_mpidr();
-	cpunr = exynos_pmu_cpunr(mpidr);
-
-	exynos_cpu_power_up(cpunr);
-}
-
 static const struct mcpm_platform_ops exynos_power_ops = {
-	.power_up		= exynos_power_up,
-	.power_down		= exynos_power_down,
+	.cpu_powerup		= exynos_cpu_powerup,
+	.cluster_powerup	= exynos_cluster_powerup,
+	.cpu_powerdown_prepare	= exynos_cpu_powerdown_prepare,
+	.cluster_powerdown_prepare = exynos_cluster_powerdown_prepare,
+	.cpu_cache_disable	= exynos_cpu_cache_disable,
+	.cluster_cache_disable	= exynos_cluster_cache_disable,
 	.wait_for_powerdown	= exynos_wait_for_powerdown,
-	.suspend		= exynos_suspend,
-	.powered_up		= exynos_powered_up,
 };
 
-static void __init exynos_mcpm_usage_count_init(void)
-{
-	unsigned int mpidr, cpu, cluster;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-
-	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
-	BUG_ON(cpu >= EXYNOS5420_CPUS_PER_CLUSTER  ||
-			cluster >= EXYNOS5420_NR_CLUSTERS);
-
-	cpu_use_count[cpu][cluster] = 1;
-}
-
 /*
  * Enable cluster-level coherency, in preparation for turning on the MMU.
  */
@@ -302,19 +173,6 @@ static void __naked exynos_pm_power_up_setup(unsigned int affinity_level)
 	"b	cci_enable_port_for_self");
 }
 
-static void __init exynos_cache_off(void)
-{
-	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
-		/* disable L2 prefetching on the Cortex-A15 */
-		asm volatile(
-		"mcr	p15, 1, %0, c15, c0, 3\n\t"
-		"isb\n\t"
-		"dsb"
-		: : "r" (0x400));
-	}
-	exynos_v7_exit_coherency_flush(all);
-}
-
 static const struct of_device_id exynos_dt_mcpm_match[] = {
 	{ .compatible = "samsung,exynos5420" },
 	{ .compatible = "samsung,exynos5800" },
@@ -370,13 +228,11 @@ static int __init exynos_mcpm_init(void)
 	 */
 	pmu_raw_writel(EXYNOS5420_SWRESET_KFC_SEL, S5P_PMU_SPARE3);
 
-	exynos_mcpm_usage_count_init();
-
 	ret = mcpm_platform_register(&exynos_power_ops);
 	if (!ret)
 		ret = mcpm_sync_init(exynos_pm_power_up_setup);
 	if (!ret)
-		ret = mcpm_loopback(exynos_cache_off); /* turn on the CCI */
+		ret = mcpm_loopback(exynos_cluster_cache_disable); /* turn on the CCI */
 	if (ret) {
 		iounmap(ns_sram_base_addr);
 		return ret;
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH RFC/RFT 6/6] ARM: hisi/hip04: remove the MCPM overhead
  2015-03-18 18:04 [PATCH RFC/RFT 0/6] MCPM refactoring and major backend simplification Nicolas Pitre
                   ` (4 preceding siblings ...)
  2015-03-18 18:04 ` [PATCH RFC/RFT 5/6] ARM: Exynos: " Nicolas Pitre
@ 2015-03-18 18:04 ` Nicolas Pitre
  5 siblings, 0 replies; 10+ messages in thread
From: Nicolas Pitre @ 2015-03-18 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

This platform is currently relying on the MCPM infrastructure for no
apparent reason.  The MCPM concurrency handling brings no benefits here
as there are no asynchronous CPU wake-ups to be concerned about (MCPM is
used here for CPU hotplug and secondary boot only, not for CPU idle).

This platform is also different from the other MCPM users because a given
CPU can't shut itself down completely without the assistance of another
CPU. This is at odds with the ongoing MCPM backend refactoring.

To simplify things, the platform is converted to hook directly into the
smp_operations callbacks, bypassing the MCPM infrastructure.
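
In short, the MCPM glue is replaced by plain SMP operations (condensed
from the patch below), with the boot wrapper pointed at
secondary_startup instead of mcpm_entry_point:

static struct smp_operations __initdata hip04_smp_ops = {
	.smp_boot_secondary	= hip04_boot_secondary,
	.cpu_die		= hip04_cpu_die,
	.cpu_kill		= hip04_cpu_kill,
};

hip04_cpu_die() only cleans its caches and loops in WFI, while
hip04_cpu_kill(), running on another CPU, polls the reset status and
drops the snoop filter once the last CPU of a cluster is down.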

Signed-off-by: Nicolas Pitre <nico@linaro.org>
---
 arch/arm/mach-hisi/platmcpm.c | 127 ++++++++++++++----------------------------
 1 file changed, 42 insertions(+), 85 deletions(-)

diff --git a/arch/arm/mach-hisi/platmcpm.c b/arch/arm/mach-hisi/platmcpm.c
index 280f3f14f7..880cbfa9c3 100644
--- a/arch/arm/mach-hisi/platmcpm.c
+++ b/arch/arm/mach-hisi/platmcpm.c
@@ -6,6 +6,8 @@
  * under the terms and conditions of the GNU General Public License,
  * version 2, as published by the Free Software Foundation.
  */
+#include <linux/init.h>
+#include <linux/smp.h>
 #include <linux/delay.h>
 #include <linux/io.h>
 #include <linux/memblock.h>
@@ -13,7 +15,9 @@
 
 #include <asm/cputype.h>
 #include <asm/cp15.h>
-#include <asm/mcpm.h>
+#include <asm/cacheflush.h>
+#include <asm/smp.h>
+#include <asm/smp_plat.h>
 
 #include "core.h"
 
@@ -94,11 +98,16 @@ static void hip04_set_snoop_filter(unsigned int cluster, unsigned int on)
 	} while (data != readl_relaxed(fabric + FAB_SF_MODE));
 }
 
-static int hip04_mcpm_power_up(unsigned int cpu, unsigned int cluster)
+static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle)
 {
+	unsigned int mpidr, cpu, cluster;
 	unsigned long data;
 	void __iomem *sys_dreq, *sys_status;
 
+	mpidr = cpu_logical_map(l_cpu);
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+
 	if (!sysctrl)
 		return -ENODEV;
 	if (cluster >= HIP04_MAX_CLUSTERS || cpu >= HIP04_MAX_CPUS_PER_CLUSTER)
@@ -118,6 +127,7 @@ static int hip04_mcpm_power_up(unsigned int cpu, unsigned int cluster)
 			cpu_relax();
 			data = readl_relaxed(sys_status);
 		} while (data & CLUSTER_DEBUG_RESET_STATUS);
+		hip04_set_snoop_filter(cluster, 1);
 	}
 
 	data = CORE_RESET_BIT(cpu) | NEON_RESET_BIT(cpu) | \
@@ -126,11 +136,15 @@ static int hip04_mcpm_power_up(unsigned int cpu, unsigned int cluster)
 	do {
 		cpu_relax();
 	} while (data == readl_relaxed(sys_status));
+
 	/*
 	 * We may fail to power up core again without this delay.
 	 * It's not mentioned in document. It's found by test.
 	 */
 	udelay(20);
+
+	arch_send_wakeup_ipi_mask(cpumask_of(l_cpu));
+
 out:
 	hip04_cpu_table[cluster][cpu]++;
 	spin_unlock_irq(&boot_lock);
@@ -138,31 +152,29 @@ out:
 	return 0;
 }
 
-static void hip04_mcpm_power_down(void)
+static void hip04_cpu_die(unsigned int l_cpu)
 {
 	unsigned int mpidr, cpu, cluster;
-	bool skip_wfi = false, last_man = false;
+	bool last_man;
 
-	mpidr = read_cpuid_mpidr();
+	mpidr = cpu_logical_map(l_cpu);
 	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
 	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
 
-	__mcpm_cpu_going_down(cpu, cluster);
-
 	spin_lock(&boot_lock);
-	BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
 	hip04_cpu_table[cluster][cpu]--;
 	if (hip04_cpu_table[cluster][cpu] == 1) {
 		/* A power_up request went ahead of us. */
-		skip_wfi = true;
+		spin_unlock(&boot_lock);
+		return;
 	} else if (hip04_cpu_table[cluster][cpu] > 1) {
 		pr_err("Cluster %d CPU%d boots multiple times\n", cluster, cpu);
 		BUG();
 	}
 
 	last_man = hip04_cluster_is_down(cluster);
-	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
-		spin_unlock(&boot_lock);
+	spin_unlock(&boot_lock);
+	if (last_man) {
 		/* Since it's Cortex A15, disable L2 prefetching. */
 		asm volatile(
 		"mcr	p15, 1, %0, c15, c0, 3 \n\t"
@@ -170,34 +182,30 @@ static void hip04_mcpm_power_down(void)
 		"dsb	"
 		: : "r" (0x400) );
 		v7_exit_coherency_flush(all);
-		hip04_set_snoop_filter(cluster, 0);
-		__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
 	} else {
-		spin_unlock(&boot_lock);
 		v7_exit_coherency_flush(louis);
 	}
 
-	__mcpm_cpu_down(cpu, cluster);
-
-	if (!skip_wfi)
+	for (;;)
 		wfi();
 }
 
-static int hip04_mcpm_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
+static int hip04_cpu_kill(unsigned int l_cpu)
 {
+	unsigned int mpidr, cpu, cluster;
 	unsigned int data, tries, count;
-	int ret = -ETIMEDOUT;
 
+	mpidr = cpu_logical_map(l_cpu);
+	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
 	BUG_ON(cluster >= HIP04_MAX_CLUSTERS ||
 	       cpu >= HIP04_MAX_CPUS_PER_CLUSTER);
 
 	count = TIMEOUT_MSEC / POLL_MSEC;
 	spin_lock_irq(&boot_lock);
 	for (tries = 0; tries < count; tries++) {
-		if (hip04_cpu_table[cluster][cpu]) {
-			ret = -EBUSY;
+		if (hip04_cpu_table[cluster][cpu])
 			goto err;
-		}
 		cpu_relax();
 		data = readl_relaxed(sysctrl + SC_CPU_RESET_STATUS(cluster));
 		if (data & CORE_WFI_STATUS(cpu))
@@ -220,64 +228,19 @@ static int hip04_mcpm_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
 	}
 	if (tries >= count)
 		goto err;
+	if (hip04_cluster_is_down(cluster))
+		hip04_set_snoop_filter(cluster, 0);
 	spin_unlock_irq(&boot_lock);
-	return 0;
+	return 1;
 err:
 	spin_unlock_irq(&boot_lock);
-	return ret;
-}
-
-static void hip04_mcpm_powered_up(void)
-{
-	unsigned int mpidr, cpu, cluster;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-
-	spin_lock(&boot_lock);
-	if (!hip04_cpu_table[cluster][cpu])
-		hip04_cpu_table[cluster][cpu] = 1;
-	spin_unlock(&boot_lock);
-}
-
-static void __naked hip04_mcpm_power_up_setup(unsigned int affinity_level)
-{
-	asm volatile ("			\n"
-"	cmp	r0, #0			\n"
-"	bxeq	lr			\n"
-	/* calculate fabric phys address */
-"	adr	r2, 2f			\n"
-"	ldmia	r2, {r1, r3}		\n"
-"	sub	r0, r2, r1		\n"
-"	ldr	r2, [r0, r3]		\n"
-	/* get cluster id from MPIDR */
-"	mrc	p15, 0, r0, c0, c0, 5	\n"
-"	ubfx	r1, r0, #8, #8		\n"
-	/* 1 << cluster id */
-"	mov	r0, #1			\n"
-"	mov	r3, r0, lsl r1		\n"
-"	ldr	r0, [r2, #"__stringify(FAB_SF_MODE)"]	\n"
-"	tst	r0, r3			\n"
-"	bxne	lr			\n"
-"	orr	r1, r0, r3		\n"
-"	str	r1, [r2, #"__stringify(FAB_SF_MODE)"]	\n"
-"1:	ldr	r0, [r2, #"__stringify(FAB_SF_MODE)"]	\n"
-"	tst	r0, r3			\n"
-"	beq	1b			\n"
-"	bx	lr			\n"
-
-"	.align	2			\n"
-"2:	.word	.			\n"
-"	.word	fabric_phys_addr	\n"
-	);
+	return 0;
 }
 
-static const struct mcpm_platform_ops hip04_mcpm_ops = {
-	.power_up		= hip04_mcpm_power_up,
-	.power_down		= hip04_mcpm_power_down,
-	.wait_for_powerdown	= hip04_mcpm_wait_for_powerdown,
-	.powered_up		= hip04_mcpm_powered_up,
+static struct smp_operations __initdata hip04_smp_ops = {
+	.smp_boot_secondary	= hip04_boot_secondary,
+	.cpu_die		= hip04_cpu_die,
+	.cpu_kill		= hip04_cpu_kill,
 };
 
 static bool __init hip04_cpu_table_init(void)
@@ -298,7 +261,7 @@ static bool __init hip04_cpu_table_init(void)
 	return true;
 }
 
-static int __init hip04_mcpm_init(void)
+static int __init hip04_smp_init(void)
 {
 	struct device_node *np, *np_sctl, *np_fab;
 	struct resource fab_res;
@@ -353,10 +316,6 @@ static int __init hip04_mcpm_init(void)
 		ret = -EINVAL;
 		goto err_table;
 	}
-	ret = mcpm_platform_register(&hip04_mcpm_ops);
-	if (ret) {
-		goto err_table;
-	}
 
 	/*
 	 * Fill the instruction address that is used after secondary core
@@ -364,13 +323,11 @@ static int __init hip04_mcpm_init(void)
 	 */
 	writel_relaxed(hip04_boot_method[0], relocation);
 	writel_relaxed(0xa5a5a5a5, relocation + 4);	/* magic number */
-	writel_relaxed(virt_to_phys(mcpm_entry_point), relocation + 8);
+	writel_relaxed(virt_to_phys(secondary_startup), relocation + 8);
 	writel_relaxed(0, relocation + 12);
 	iounmap(relocation);
 
-	mcpm_sync_init(hip04_mcpm_power_up_setup);
-	mcpm_smp_set_ops();
-	pr_info("HiP04 MCPM initialized\n");
+	smp_set_ops(&hip04_smp_ops);
 	return ret;
 err_table:
 	iounmap(fabric);
@@ -383,4 +340,4 @@ err_reloc:
 err:
 	return ret;
 }
-early_initcall(hip04_mcpm_init);
+early_initcall(hip04_smp_init);
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH RFC/RFT 5/6] ARM: Exynos: migrate DCSCB to the new MCPM backend abstraction
  2015-03-18 18:04 ` [PATCH RFC/RFT 5/6] ARM: Exynos: " Nicolas Pitre
@ 2015-03-24 23:24   ` Nicolas Pitre
  2015-03-25  9:40     ` Daniel Lezcano
  0 siblings, 1 reply; 10+ messages in thread
From: Nicolas Pitre @ 2015-03-24 23:24 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, 18 Mar 2015, Nicolas Pitre wrote:

> The custom suspend callback is removed for this change. That includes
> the dubious call to exynos_cpu_power_up() that was present at the end
> of exynos_suspend().

After testing on actual hardware, it turns out that this call is 
important.  This patch is therefore amended with the following:

diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
index d4bbbfb5fe..9bdf54795f 100644
--- a/arch/arm/mach-exynos/mcpm-exynos.c
+++ b/arch/arm/mach-exynos/mcpm-exynos.c
@@ -152,6 +152,12 @@ static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
 	return -ETIMEDOUT; /* timeout */
 }
 
+static void exynos_cpu_is_up(unsigned int cpu, unsigned int cluster)
+{
+	/* especially when resuming: make sure power control is set */
+	exynos_cpu_powerup(cpu, cluster);
+}
+
 static const struct mcpm_platform_ops exynos_power_ops = {
 	.cpu_powerup		= exynos_cpu_powerup,
 	.cluster_powerup	= exynos_cluster_powerup,
@@ -160,6 +166,7 @@ static const struct mcpm_platform_ops exynos_power_ops = {
 	.cpu_cache_disable	= exynos_cpu_cache_disable,
 	.cluster_cache_disable	= exynos_cluster_cache_disable,
 	.wait_for_powerdown	= exynos_wait_for_powerdown,
+	.cpu_is_up		= exynos_cpu_is_up,
 };
 
 /*

The whole commit now appears as follows in my git tree:

commit 0d86b0b4cf869fa48d96bde231b9d04ea68b6422
Author: Nicolas Pitre <nicolas.pitre@linaro.org>
Date:   Mon Mar 16 17:16:07 2015 -0400

    ARM: Exynos: migrate DCSCB to the new MCPM backend abstraction
    
    The custom suspend callback is removed for this change. The extra call
    to exynos_cpu_power_up() that was present at the end of exynos_suspend()
    is now relocated to the cpu_is_up callback.
    
    Signed-off-by: Nicolas Pitre <nico@linaro.org>

diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
index b0d3c2e876..9bdf54795f 100644
--- a/arch/arm/mach-exynos/mcpm-exynos.c
+++ b/arch/arm/mach-exynos/mcpm-exynos.c
@@ -61,25 +61,7 @@ static void __iomem *ns_sram_base_addr;
 	: "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", \
 	  "r9", "r10", "lr", "memory")
 
-/*
- * We can't use regular spinlocks. In the switcher case, it is possible
- * for an outbound CPU to call power_down() after its inbound counterpart
- * is already live using the same logical CPU number which trips lockdep
- * debugging.
- */
-static arch_spinlock_t exynos_mcpm_lock = __ARCH_SPIN_LOCK_UNLOCKED;
-static int
-cpu_use_count[EXYNOS5420_CPUS_PER_CLUSTER][EXYNOS5420_NR_CLUSTERS];
-
-#define exynos_cluster_usecnt(cluster) \
-	(cpu_use_count[0][cluster] +   \
-	 cpu_use_count[1][cluster] +   \
-	 cpu_use_count[2][cluster] +   \
-	 cpu_use_count[3][cluster])
-
-#define exynos_cluster_unused(cluster) !exynos_cluster_usecnt(cluster)
-
-static int exynos_power_up(unsigned int cpu, unsigned int cluster)
+static int exynos_cpu_powerup(unsigned int cpu, unsigned int cluster)
 {
 	unsigned int cpunr = cpu + (cluster * EXYNOS5420_CPUS_PER_CLUSTER);
 
@@ -88,127 +70,65 @@ static int exynos_power_up(unsigned int cpu, unsigned int cluster)
 		cluster >= EXYNOS5420_NR_CLUSTERS)
 		return -EINVAL;
 
-	/*
-	 * Since this is called with IRQs enabled, and no arch_spin_lock_irq
-	 * variant exists, we need to disable IRQs manually here.
-	 */
-	local_irq_disable();
-	arch_spin_lock(&exynos_mcpm_lock);
-
-	cpu_use_count[cpu][cluster]++;
-	if (cpu_use_count[cpu][cluster] == 1) {
-		bool was_cluster_down =
-			(exynos_cluster_usecnt(cluster) == 1);
-
-		/*
-		 * Turn on the cluster (L2/COMMON) and then power on the
-		 * cores.
-		 */
-		if (was_cluster_down)
-			exynos_cluster_power_up(cluster);
-
-		exynos_cpu_power_up(cpunr);
-	} else if (cpu_use_count[cpu][cluster] != 2) {
-		/*
-		 * The only possible values are:
-		 * 0 = CPU down
-		 * 1 = CPU (still) up
-		 * 2 = CPU requested to be up before it had a chance
-		 *     to actually make itself down.
-		 * Any other value is a bug.
-		 */
-		BUG();
-	}
+	exynos_cpu_power_up(cpunr);
+	return 0;
+}
 
-	arch_spin_unlock(&exynos_mcpm_lock);
-	local_irq_enable();
+static int exynos_cluster_powerup(unsigned int cluster)
+{
+	pr_debug("%s: cluster %u\n", __func__, cluster);
+	if (cluster >= EXYNOS5420_NR_CLUSTERS)
+		return -EINVAL;
 
+	exynos_cluster_power_up(cluster);
 	return 0;
 }
 
-/*
- * NOTE: This function requires the stack data to be visible through power down
- * and can only be executed on processors like A15 and A7 that hit the cache
- * with the C bit clear in the SCTLR register.
- */
-static void exynos_power_down(void)
+static void exynos_cpu_powerdown_prepare(unsigned int cpu, unsigned int cluster)
 {
-	unsigned int mpidr, cpu, cluster;
-	bool last_man = false, skip_wfi = false;
-	unsigned int cpunr;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-	cpunr =  cpu + (cluster * EXYNOS5420_CPUS_PER_CLUSTER);
+	unsigned int cpunr = cpu + (cluster * EXYNOS5420_CPUS_PER_CLUSTER);
 
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
 	BUG_ON(cpu >= EXYNOS5420_CPUS_PER_CLUSTER ||
 			cluster >= EXYNOS5420_NR_CLUSTERS);
+	exynos_cpu_power_down(cpunr);
+}
 
-	__mcpm_cpu_going_down(cpu, cluster);
-
-	arch_spin_lock(&exynos_mcpm_lock);
-	BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
-	cpu_use_count[cpu][cluster]--;
-	if (cpu_use_count[cpu][cluster] == 0) {
-		exynos_cpu_power_down(cpunr);
-
-		if (exynos_cluster_unused(cluster)) {
-			exynos_cluster_power_down(cluster);
-			last_man = true;
-		}
-	} else if (cpu_use_count[cpu][cluster] == 1) {
-		/*
-		 * A power_up request went ahead of us.
-		 * Even if we do not want to shut this CPU down,
-		 * the caller expects a certain state as if the WFI
-		 * was aborted.  So let's continue with cache cleaning.
-		 */
-		skip_wfi = true;
-	} else {
-		BUG();
-	}
-
-	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
-		arch_spin_unlock(&exynos_mcpm_lock);
-
-		if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
-			/*
-			 * On the Cortex-A15 we need to disable
-			 * L2 prefetching before flushing the cache.
-			 */
-			asm volatile(
-			"mcr	p15, 1, %0, c15, c0, 3\n\t"
-			"isb\n\t"
-			"dsb"
-			: : "r" (0x400));
-		}
+static void exynos_cluster_powerdown_prepare(unsigned int cluster)
+{
+	pr_debug("%s: cluster %u\n", __func__, cluster);
+	BUG_ON(cluster >= EXYNOS5420_NR_CLUSTERS);
+	exynos_cluster_power_down(cluster);
+}
 
-		/* Flush all cache levels for this cluster. */
-		exynos_v7_exit_coherency_flush(all);
+static void exynos_cpu_cache_disable(void)
+{
+	/* Disable and flush the local CPU cache. */
+	exynos_v7_exit_coherency_flush(louis);
+}
 
+static void exynos_cluster_cache_disable(void)
+{
+	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
 		/*
-		 * Disable cluster-level coherency by masking
-		 * incoming snoops and DVM messages:
+		 * On the Cortex-A15 we need to disable
+		 * L2 prefetching before flushing the cache.
 		 */
-		cci_disable_port_by_cpu(mpidr);
-
-		__mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN);
-	} else {
-		arch_spin_unlock(&exynos_mcpm_lock);
-
-		/* Disable and flush the local CPU cache. */
-		exynos_v7_exit_coherency_flush(louis);
+		asm volatile(
+		"mcr	p15, 1, %0, c15, c0, 3\n\t"
+		"isb\n\t"
+		"dsb"
+		: : "r" (0x400));
 	}
 
-	__mcpm_cpu_down(cpu, cluster);
-
-	/* Now we are prepared for power-down, do it: */
-	if (!skip_wfi)
-		wfi();
+	/* Flush all cache levels for this cluster. */
+	exynos_v7_exit_coherency_flush(all);
 
-	/* Not dead at this point?  Let our caller cope. */
+	/*
+	 * Disable cluster-level coherency by masking
+	 * incoming snoops and DVM messages:
+	 */
+	cci_disable_port_by_cpu(read_cpuid_mpidr());
 }
 
 static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
@@ -222,10 +142,8 @@ static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
 
 	/* Wait for the core state to be OFF */
 	while (tries--) {
-		if (ACCESS_ONCE(cpu_use_count[cpu][cluster]) == 0) {
-			if ((exynos_cpu_power_state(cpunr) == 0))
-				return 0; /* success: the CPU is halted */
-		}
+		if ((exynos_cpu_power_state(cpunr) == 0))
+			return 0; /* success: the CPU is halted */
 
 		/* Otherwise, wait and retry: */
 		msleep(1);
@@ -234,63 +152,23 @@ static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
 	return -ETIMEDOUT; /* timeout */
 }
 
-static void exynos_powered_up(void)
-{
-	unsigned int mpidr, cpu, cluster;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-
-	arch_spin_lock(&exynos_mcpm_lock);
-	if (cpu_use_count[cpu][cluster] == 0)
-		cpu_use_count[cpu][cluster] = 1;
-	arch_spin_unlock(&exynos_mcpm_lock);
-}
-
-static void exynos_suspend(u64 residency)
+static void exynos_cpu_is_up(unsigned int cpu, unsigned int cluster)
 {
-	unsigned int mpidr, cpunr;
-
-	exynos_power_down();
-
-	/*
-	 * Execution reaches here only if cpu did not power down.
-	 * Hence roll back the changes done in exynos_power_down function.
-	 *
-	 * CAUTION: "This function requires the stack data to be visible through
-	 * power down and can only be executed on processors like A15 and A7
-	 * that hit the cache with the C bit clear in the SCTLR register."
-	*/
-	mpidr = read_cpuid_mpidr();
-	cpunr = exynos_pmu_cpunr(mpidr);
-
-	exynos_cpu_power_up(cpunr);
+	/* especially when resuming: make sure power control is set */
+	exynos_cpu_powerup(cpu, cluster);
 }
 
 static const struct mcpm_platform_ops exynos_power_ops = {
-	.power_up		= exynos_power_up,
-	.power_down		= exynos_power_down,
+	.cpu_powerup		= exynos_cpu_powerup,
+	.cluster_powerup	= exynos_cluster_powerup,
+	.cpu_powerdown_prepare	= exynos_cpu_powerdown_prepare,
+	.cluster_powerdown_prepare = exynos_cluster_powerdown_prepare,
+	.cpu_cache_disable	= exynos_cpu_cache_disable,
+	.cluster_cache_disable	= exynos_cluster_cache_disable,
 	.wait_for_powerdown	= exynos_wait_for_powerdown,
-	.suspend		= exynos_suspend,
-	.powered_up		= exynos_powered_up,
+	.cpu_is_up		= exynos_cpu_is_up,
 };
 
-static void __init exynos_mcpm_usage_count_init(void)
-{
-	unsigned int mpidr, cpu, cluster;
-
-	mpidr = read_cpuid_mpidr();
-	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-
-	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
-	BUG_ON(cpu >= EXYNOS5420_CPUS_PER_CLUSTER  ||
-			cluster >= EXYNOS5420_NR_CLUSTERS);
-
-	cpu_use_count[cpu][cluster] = 1;
-}
-
 /*
  * Enable cluster-level coherency, in preparation for turning on the MMU.
  */
@@ -302,19 +180,6 @@ static void __naked exynos_pm_power_up_setup(unsigned int affinity_level)
 	"b	cci_enable_port_for_self");
 }
 
-static void __init exynos_cache_off(void)
-{
-	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A15) {
-		/* disable L2 prefetching on the Cortex-A15 */
-		asm volatile(
-		"mcr	p15, 1, %0, c15, c0, 3\n\t"
-		"isb\n\t"
-		"dsb"
-		: : "r" (0x400));
-	}
-	exynos_v7_exit_coherency_flush(all);
-}
-
 static const struct of_device_id exynos_dt_mcpm_match[] = {
 	{ .compatible = "samsung,exynos5420" },
 	{ .compatible = "samsung,exynos5800" },
@@ -370,13 +235,11 @@ static int __init exynos_mcpm_init(void)
 	 */
 	pmu_raw_writel(EXYNOS5420_SWRESET_KFC_SEL, S5P_PMU_SPARE3);
 
-	exynos_mcpm_usage_count_init();
-
 	ret = mcpm_platform_register(&exynos_power_ops);
 	if (!ret)
 		ret = mcpm_sync_init(exynos_pm_power_up_setup);
 	if (!ret)
-		ret = mcpm_loopback(exynos_cache_off); /* turn on the CCI */
+		ret = mcpm_loopback(exynos_cluster_cache_disable); /* turn on the CCI */
 	if (ret) {
 		iounmap(ns_sram_base_addr);
 		return ret;


Nicolas

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH RFC/RFT 5/6] ARM: Exynos: migrate DCSCB to the new MCPM backend abstraction
  2015-03-24 23:24   ` Nicolas Pitre
@ 2015-03-25  9:40     ` Daniel Lezcano
  2015-03-25 16:02       ` Nicolas Pitre
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Lezcano @ 2015-03-25  9:40 UTC (permalink / raw)
  To: linux-arm-kernel

On 03/25/2015 12:24 AM, Nicolas Pitre wrote:
> On Wed, 18 Mar 2015, Nicolas Pitre wrote:
>
>> The custom suspend callback is removed for this change. That includes
>> the dubious call to exynos_cpu_power_up() that was present at the end
>> of exynos_suspend().
>
> After testing on actual hardware, it turns out that this call is
> important.  This patch is therefore amended with the following:
>
> diff --git a/arch/arm/mach-exynos/mcpm-exynos.c b/arch/arm/mach-exynos/mcpm-exynos.c
> index d4bbbfb5fe..9bdf54795f 100644
> --- a/arch/arm/mach-exynos/mcpm-exynos.c
> +++ b/arch/arm/mach-exynos/mcpm-exynos.c
> @@ -152,6 +152,12 @@ static int exynos_wait_for_powerdown(unsigned int cpu, unsigned int cluster)
>   	return -ETIMEDOUT; /* timeout */
>   }
>
> +static void exynos_cpu_is_up(unsigned int cpu, unsigned int cluster)
> +{
> +	/* especially when resuming: make sure power control is set */
> +	exynos_cpu_powerup(cpu, cluster);
> +}
> +
>   static const struct mcpm_platform_ops exynos_power_ops = {
>   	.cpu_powerup		= exynos_cpu_powerup,
>   	.cluster_powerup	= exynos_cluster_powerup,
> @@ -160,6 +166,7 @@ static const struct mcpm_platform_ops exynos_power_ops = {
>   	.cpu_cache_disable	= exynos_cpu_cache_disable,
>   	.cluster_cache_disable	= exynos_cluster_cache_disable,
>   	.wait_for_powerdown	= exynos_wait_for_powerdown,
> +	.cpu_is_up		= exynos_cpu_is_up,
>   };
>
>   /*
>
> The whole commit now appears as follows in my git tree:
>
> commit 0d86b0b4cf869fa48d96bde231b9d04ea68b6422
> Author: Nicolas Pitre <nicolas.pitre@linaro.org>
> Date:   Mon Mar 16 17:16:07 2015 -0400
>
>      ARM: Exynos: migrate DCSCB to the new MCPM backend abstraction
>
>      The custom suspend callback is removed for this change. The extra call
>      to exynos_cpu_power_up() that was present at the end of exynos_suspend()
>      is now relocated to the cpu_is_up callback.
>
>      Signed-off-by: Nicolas Pitre <nico@linaro.org>

Tested on exynos5800 (chromebook2).

Tested-by: Daniel Lezcano <daniel.lezcano@linaro.org>

-- 
  <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

Follow Linaro:  <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH RFC/RFT 5/6] ARM: Exynos: migrate DCSCB to the new MCPM backend abstraction
  2015-03-25  9:40     ` Daniel Lezcano
@ 2015-03-25 16:02       ` Nicolas Pitre
  0 siblings, 0 replies; 10+ messages in thread
From: Nicolas Pitre @ 2015-03-25 16:02 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, 25 Mar 2015, Daniel Lezcano wrote:

> On 03/25/2015 12:24 AM, Nicolas Pitre wrote:
> > On Wed, 18 Mar 2015, Nicolas Pitre wrote:
> >
> > > The custom suspend callback is removed for this change. That includes
> > > the dubious call to exynos_cpu_power_up() that was present at the end
> > > of exynos_suspend().
> >
> > After testing on actual hardware, it turns out that this call is
> > important.  This patch is therefore amended with the following:
> >
> > diff --git a/arch/arm/mach-exynos/mcpm-exynos.c
> > b/arch/arm/mach-exynos/mcpm-exynos.c
> > index d4bbbfb5fe..9bdf54795f 100644
> > --- a/arch/arm/mach-exynos/mcpm-exynos.c
> > +++ b/arch/arm/mach-exynos/mcpm-exynos.c
> > @@ -152,6 +152,12 @@ static int exynos_wait_for_powerdown(unsigned int cpu,
> > unsigned int cluster)
> >   	return -ETIMEDOUT; /* timeout */
> >   }
> >
> > +static void exynos_cpu_is_up(unsigned int cpu, unsigned int cluster)
> > +{
> > +	/* especially when resuming: make sure power control is set */
> > +	exynos_cpu_powerup(cpu, cluster);
> > +}
> > +
> >   static const struct mcpm_platform_ops exynos_power_ops = {
> >    .cpu_powerup		= exynos_cpu_powerup,
> >    .cluster_powerup	= exynos_cluster_powerup,
> > @@ -160,6 +166,7 @@ static const struct mcpm_platform_ops exynos_power_ops =
> > {
> >    .cpu_cache_disable	= exynos_cpu_cache_disable,
> >    .cluster_cache_disable	= exynos_cluster_cache_disable,
> >    .wait_for_powerdown	= exynos_wait_for_powerdown,
> > +	.cpu_is_up		= exynos_cpu_is_up,
> >   };
> >
> >   /*
> >
> > The whole commit now appears as follows in my git tree:
> >
> > commit 0d86b0b4cf869fa48d96bde231b9d04ea68b6422
> > Author: Nicolas Pitre <nicolas.pitre@linaro.org>
> > Date:   Mon Mar 16 17:16:07 2015 -0400
> >
> > ARM: Exynos: migrate DCSCB to the new MCPM backend abstraction
> >
> >      The custom suspend callback is removed for this change. The extra call
> >      to exynos_cpu_power_up() that was present at the end of
> >      exynos_suspend()
> >      is now relocated to the cpu_is_up callback.
> >
> >      Signed-off-by: Nicolas Pitre <nico@linaro.org>
> 
> Tested on exynos5800 (chromebook2).
> 
> Tested-by: Daniel Lezcano <daniel.lezcano@linaro.org>

Thanks!


Nicolas

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2015-03-25 16:02 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-18 18:04 [PATCH RFC/RFT 0/6] MCPM refactoring and major backend simplification Nicolas Pitre
2015-03-18 18:04 ` [PATCH RFC/RFT 1/6] MCPM: move the algorithmic complexity to the core code Nicolas Pitre
2015-03-18 18:04 ` [PATCH RFC/RFT 2/6] ARM: vexpress: migrate TC2 to the new MCPM backend abstraction Nicolas Pitre
2015-03-18 18:04 ` [PATCH RFC/RFT 3/6] ARM: vexpress: DCSCB: tighten CPU validity assertion Nicolas Pitre
2015-03-18 18:04 ` [PATCH RFC/RFT 4/6] ARM: vexpress: migrate DCSCB to the new MCPM backend abstraction Nicolas Pitre
2015-03-18 18:04 ` [PATCH RFC/RFT 5/6] ARM: Exynos: " Nicolas Pitre
2015-03-24 23:24   ` Nicolas Pitre
2015-03-25  9:40     ` Daniel Lezcano
2015-03-25 16:02       ` Nicolas Pitre
2015-03-18 18:04 ` [PATCH RFC/RFT 6/6] ARM: hisi/hip04: remove the MCPM overhead Nicolas Pitre
