* [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support
@ 2013-07-10 22:11 Mark Rutland
  2013-07-10 22:13 ` [RFC PATCH 1/5] arm64: reorganise smp_enable_ops Mark Rutland
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Mark Rutland @ 2013-07-10 22:11 UTC (permalink / raw)
  To: linux-arm-kernel

The following patches add basic CPU_HOTPLUG support to arm64, which
combined with appropriate firmware (e.g. [1]) can be used to power CPUs
up and down dynamically. From discussions at Connect it seemed that
several people were interested in working in this area, so I thought I'd
make my current implementation public now that I've managed to regain
access to my inbox.

I've tested this series with the bootwrapper PSCI implementation I've
placed on linux-arm.org [1] and a modified foundation model dts with a
psci node and each CPU's enable-method set to "psci", repeatedly
cycling all CPUs off and on from a shell:

for C in $(seq 0 3); do
	./cyclichotplug.sh $C >/dev/null 2>&1 &
done

---->8----
#!/bin/sh
# cyclichotplug.sh -- repeatedly offline and online one CPU

CPU=$1

if [ -z "$CPU" ]; then
	printf "Usage: %s <cpu id>\n" "$0"
	exit 1
fi

ONLINEFILE=/sys/devices/system/cpu/cpu$CPU/online

while true; do
	echo 0 > "$ONLINEFILE"
	echo 1 > "$ONLINEFILE"
done
---->8----
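
For reference, the dts modification mentioned above might look roughly
like the sketch below. This is only an illustration: the psci method
and function id values are placeholders, and the real ids depend
entirely on what the firmware implements (see the arm,psci devicetree
binding):

---->8----
/* Hypothetical dts additions -- ids below are placeholders only. */
psci {
	compatible = "arm,psci";
	method = "smc";
	cpu_on = <0x84000002>;		/* placeholder id */
	cpu_off = <0x84000001>;		/* placeholder id */
};

cpus {
	cpu@0 {
		device_type = "cpu";
		compatible = "arm,armv8";
		enable-method = "psci";
		reg = <0x0>;
	};
	/* ...and likewise for cpu@1..cpu@3 */
};
---->8----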

Patches are based on v3.10.

Thanks,
Mark.

[1] http://linux-arm.org/git?p=boot-wrapper-aarch64.git;a=shortlog;h=refs/tags/simple-psci

Mark Rutland (5):
  arm64: reorganise smp_enable_ops
  arm64: factor out spin-table boot method
  arm64: read enable-method for CPU0
  arm64: add CPU_HOTPLUG infrastructure
  arm64: add PSCI CPU_OFF-based hotplug support

 arch/arm64/Kconfig                 |    7 ++
 arch/arm64/include/asm/irq.h       |    1 +
 arch/arm64/include/asm/smp.h       |   37 +++++--
 arch/arm64/kernel/cputable.c       |    2 +-
 arch/arm64/kernel/head.S           |   12 +-
 arch/arm64/kernel/irq.c            |   61 ++++++++++
 arch/arm64/kernel/process.c        |    7 ++
 arch/arm64/kernel/smp.c            |  215 ++++++++++++++++++++++--------------
 arch/arm64/kernel/smp_psci.c       |   54 +++++++--
 arch/arm64/kernel/smp_spin_table.c |   85 +++++++++++++-
 arch/arm64/kernel/vmlinux.lds.S    |    1 -
 11 files changed, 375 insertions(+), 107 deletions(-)

-- 
1.7.9.5

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [RFC PATCH 1/5] arm64: reorganise smp_enable_ops
  2013-07-10 22:11 [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Mark Rutland
@ 2013-07-10 22:13 ` Mark Rutland
  2013-07-10 22:15 ` [RFC PATCH 2/5] arm64: factor out spin-table boot method Mark Rutland
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2013-07-10 22:13 UTC (permalink / raw)
  To: linux-arm-kernel

For hotplug support, we're going to want a place to store operations
that do more than bring CPUs online, and it makes sense to group these
with our current smp_enable_ops.

This patch renames smp_enable_ops to smp_ops to make the intended use of
the structure clearer. While we're at it, we mark the necessary structs
and functions as __cpuinit* rather than __init*, fix up instances of the
cpu parameter to be an unsigned int, and rename the *_cpu functions to
cpu_* to reduce future churn when smp_operations is extended.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/include/asm/smp.h       |   10 +++++-----
 arch/arm64/kernel/smp.c            |   24 ++++++++++++------------
 arch/arm64/kernel/smp_psci.c       |   10 +++++-----
 arch/arm64/kernel/smp_spin_table.c |   10 +++++-----
 4 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 4b8023c..90626b6 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -68,13 +68,13 @@ extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
 
 struct device_node;
 
-struct smp_enable_ops {
+struct smp_operations {
 	const char	*name;
-	int		(*init_cpu)(struct device_node *, int);
-	int		(*prepare_cpu)(int);
+	int		(*cpu_init)(struct device_node *, unsigned int);
+	int		(*cpu_prepare)(unsigned int);
 };
 
-extern const struct smp_enable_ops smp_spin_table_ops;
-extern const struct smp_enable_ops smp_psci_ops;
+extern const struct smp_operations smp_spin_table_ops;
+extern const struct smp_operations smp_psci_ops;
 
 #endif /* ifndef __ASM_SMP_H */
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 5d54e37..207bf4f 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -235,17 +235,17 @@ void __init smp_prepare_boot_cpu(void)
 
 static void (*smp_cross_call)(const struct cpumask *, unsigned int);
 
-static const struct smp_enable_ops *enable_ops[] __initconst = {
+static const struct smp_operations *supported_smp_ops[] __initconst = {
 	&smp_spin_table_ops,
 	&smp_psci_ops,
 	NULL,
 };
 
-static const struct smp_enable_ops *smp_enable_ops[NR_CPUS];
+static const struct smp_operations *smp_ops[NR_CPUS];
 
-static const struct smp_enable_ops * __init smp_get_enable_ops(const char *name)
+static const struct smp_operations * __init smp_get_ops(const char *name)
 {
-	const struct smp_enable_ops **ops = enable_ops;
+	const struct smp_operations **ops = supported_smp_ops;
 
 	while (*ops) {
 		if (!strcmp(name, (*ops)->name))
@@ -266,7 +266,7 @@ void __init smp_init_cpus(void)
 {
 	const char *enable_method;
 	struct device_node *dn = NULL;
-	int i, cpu = 1;
+	unsigned int i, cpu = 1;
 	bool bootcpu_valid = false;
 
 	while ((dn = of_find_node_by_type(dn, "cpu"))) {
@@ -345,15 +345,15 @@ void __init smp_init_cpus(void)
 			goto next;
 		}
 
-		smp_enable_ops[cpu] = smp_get_enable_ops(enable_method);
+		smp_ops[cpu] = smp_get_ops(enable_method);
 
-		if (!smp_enable_ops[cpu]) {
+		if (!smp_ops[cpu]) {
 			pr_err("%s: invalid enable-method property: %s\n",
 			       dn->full_name, enable_method);
 			goto next;
 		}
 
-		if (smp_enable_ops[cpu]->init_cpu(dn, cpu))
+		if (smp_ops[cpu]->cpu_init(dn, cpu))
 			goto next;
 
 		pr_debug("cpu logical map 0x%llx\n", hwid);
@@ -383,8 +383,8 @@ next:
 
 void __init smp_prepare_cpus(unsigned int max_cpus)
 {
-	int cpu, err;
-	unsigned int ncores = num_possible_cpus();
+	int err;
+	unsigned int cpu, ncores = num_possible_cpus();
 
 	/*
 	 * are we trying to boot more cores than exist?
@@ -411,10 +411,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 		if (cpu == smp_processor_id())
 			continue;
 
-		if (!smp_enable_ops[cpu])
+		if (!smp_ops[cpu])
 			continue;
 
-		err = smp_enable_ops[cpu]->prepare_cpu(cpu);
+		err = smp_ops[cpu]->cpu_prepare(cpu);
 		if (err)
 			continue;
 
diff --git a/arch/arm64/kernel/smp_psci.c b/arch/arm64/kernel/smp_psci.c
index 0c53330..e833930 100644
--- a/arch/arm64/kernel/smp_psci.c
+++ b/arch/arm64/kernel/smp_psci.c
@@ -23,12 +23,12 @@
 #include <asm/psci.h>
 #include <asm/smp_plat.h>
 
-static int __init smp_psci_init_cpu(struct device_node *dn, int cpu)
+static int __cpuinit smp_psci_cpu_init(struct device_node *dn, unsigned int cpu)
 {
 	return 0;
 }
 
-static int __init smp_psci_prepare_cpu(int cpu)
+static int __cpuinit smp_psci_cpu_prepare(unsigned int cpu)
 {
 	int err;
 
@@ -46,8 +46,8 @@ static int __init smp_psci_prepare_cpu(int cpu)
 	return 0;
 }
 
-const struct smp_enable_ops smp_psci_ops __initconst = {
+const struct smp_operations smp_psci_ops __cpuinitconst = {
 	.name		= "psci",
-	.init_cpu	= smp_psci_init_cpu,
-	.prepare_cpu	= smp_psci_prepare_cpu,
+	.cpu_init	= smp_psci_cpu_init,
+	.cpu_prepare	= smp_psci_cpu_prepare,
 };
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 7c35fa6..098bf64 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -24,7 +24,7 @@
 
 static phys_addr_t cpu_release_addr[NR_CPUS];
 
-static int __init smp_spin_table_init_cpu(struct device_node *dn, int cpu)
+static int __cpuinit smp_spin_table_cpu_init(struct device_node *dn, unsigned int cpu)
 {
 	/*
 	 * Determine the address from which the CPU is polling.
@@ -40,7 +40,7 @@ static int __init smp_spin_table_init_cpu(struct device_node *dn, int cpu)
 	return 0;
 }
 
-static int __init smp_spin_table_prepare_cpu(int cpu)
+static int __cpuinit smp_spin_table_cpu_prepare(unsigned int cpu)
 {
 	void **release_addr;
 
@@ -59,8 +59,8 @@ static int __init smp_spin_table_prepare_cpu(int cpu)
 	return 0;
 }
 
-const struct smp_enable_ops smp_spin_table_ops __initconst = {
+const struct smp_operations smp_spin_table_ops __cpuinitconst = {
 	.name		= "spin-table",
-	.init_cpu 	= smp_spin_table_init_cpu,
-	.prepare_cpu	= smp_spin_table_prepare_cpu,
+	.cpu_init	= smp_spin_table_cpu_init,
+	.cpu_prepare	= smp_spin_table_cpu_prepare,
 };
-- 
1.7.9.5


* [RFC PATCH 2/5] arm64: factor out spin-table boot method
  2013-07-10 22:11 [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Mark Rutland
  2013-07-10 22:13 ` [RFC PATCH 1/5] arm64: reorganise smp_enable_ops Mark Rutland
@ 2013-07-10 22:15 ` Mark Rutland
  2013-07-10 22:15 ` [RFC PATCH 3/5] arm64: read enable-method for CPU0 Mark Rutland
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2013-07-10 22:15 UTC (permalink / raw)
  To: linux-arm-kernel

The arm64 kernel has an internal holding pen, which is necessary for
some systems where we can't bring CPUs online individually and must hold
multiple CPUs in a safe area until the kernel is able to handle them.
The current SMP infrastructure for arm64 is closely coupled to this
holding pen, and alternative boot methods must launch CPUs into the pen,
whence they are launched into the kernel proper.

With PSCI (and possibly other future boot methods), we can bring CPUs
online individually, and need not perform the secondary_holding_pen
dance. Instead, this patch factors the holding pen management code out
to the spin-table boot method code, as it is the only boot method
requiring the pen.

A new entry point for secondaries, secondary_entry, is added for other
boot methods to use, which bypasses the holding pen and its associated
overhead when bringing CPUs online. The smp.pen.text section is also
removed, as the pen can live in head.text without problem.

The smp_operations structure is extended with two new functions,
cpu_boot and cpu_postboot, for bringing a CPU into the kernel and
performing any post-boot cleanup required by a boot method (e.g.
resetting secondary_holding_pen_release to INVALID_HWID).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/include/asm/smp.h       |   17 +++++++-
 arch/arm64/kernel/head.S           |   12 +++++-
 arch/arm64/kernel/smp.c            |   67 +++-----------------------------
 arch/arm64/kernel/smp_psci.c       |   16 ++++----
 arch/arm64/kernel/smp_spin_table.c |   75 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S    |    1 -
 6 files changed, 115 insertions(+), 73 deletions(-)

diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 90626b6..af39644 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -60,8 +60,7 @@ struct secondary_data {
 	void *stack;
 };
 extern struct secondary_data secondary_data;
-extern void secondary_holding_pen(void);
-extern volatile unsigned long secondary_holding_pen_release;
+extern void secondary_entry(void);
 
 extern void arch_send_call_function_single_ipi(int cpu);
 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
@@ -70,8 +69,22 @@ struct device_node;
 
 struct smp_operations {
 	const char	*name;
+	/*
+	 * Check devicetree data for cpu
+	 */
 	int		(*cpu_init)(struct device_node *, unsigned int);
+	/*
+	 * Test if cpu is present and bootable
+	 */
 	int		(*cpu_prepare)(unsigned int);
+	/*
+	 * Boot cpu into the kernel
+	 */
+	int		(*cpu_boot)(unsigned int);
+	/*
+	 * Performs post-boot cleanup
+	 */
+	void		(*cpu_postboot)(void);
 };
 
 extern const struct smp_operations smp_spin_table_ops;
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 53dcae4..3532ca6 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -217,7 +217,6 @@ ENTRY(__boot_cpu_mode)
 	.quad	PAGE_OFFSET
 
 #ifdef CONFIG_SMP
-	.pushsection    .smp.pen.text, "ax"
 	.align	3
 1:	.quad	.
 	.quad	secondary_holding_pen_release
@@ -242,7 +241,16 @@ pen:	ldr	x4, [x3]
 	wfe
 	b	pen
 ENDPROC(secondary_holding_pen)
-	.popsection
+
+	/*
+	 * Secondary entry point that jumps straight into the kernel. Only to
+	 * be used where CPUs are brought online dynamically by the kernel.
+	 */
+ENTRY(secondary_entry)
+	bl	__calc_phys_offset		// x2=phys offset
+	bl	el2_setup			// Drop to EL1
+	b	secondary_startup
+ENDPROC(secondary_entry)
 
 ENTRY(secondary_startup)
 	/*
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 207bf4f..e3a4fa1 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -54,7 +54,6 @@
  * where to place its SVC stack
  */
 struct secondary_data secondary_data;
-volatile unsigned long secondary_holding_pen_release = INVALID_HWID;
 
 enum ipi_msg_type {
 	IPI_RESCHEDULE,
@@ -63,22 +62,7 @@ enum ipi_msg_type {
 	IPI_CPU_STOP,
 };
 
-static DEFINE_RAW_SPINLOCK(boot_lock);
-
-/*
- * Write secondary_holding_pen_release in a way that is guaranteed to be
- * visible to all observers, irrespective of whether they're taking part
- * in coherency or not.  This is necessary for the hotplug code to work
- * reliably.
- */
-static void __cpuinit write_pen_release(u64 val)
-{
-	void *start = (void *)&secondary_holding_pen_release;
-	unsigned long size = sizeof(secondary_holding_pen_release);
-
-	secondary_holding_pen_release = val;
-	__flush_dcache_area(start, size);
-}
+static const struct smp_operations *smp_ops[NR_CPUS];
 
 /*
  * Boot a secondary CPU, and assign it the specified idle task.
@@ -86,38 +70,10 @@ static void __cpuinit write_pen_release(u64 val)
  */
 static int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
 {
-	unsigned long timeout;
-
-	/*
-	 * Set synchronisation state between this boot processor
-	 * and the secondary one
-	 */
-	raw_spin_lock(&boot_lock);
-
-	/*
-	 * Update the pen release flag.
-	 */
-	write_pen_release(cpu_logical_map(cpu));
-
-	/*
-	 * Send an event, causing the secondaries to read pen_release.
-	 */
-	sev();
-
-	timeout = jiffies + (1 * HZ);
-	while (time_before(jiffies, timeout)) {
-		if (secondary_holding_pen_release == INVALID_HWID)
-			break;
-		udelay(10);
-	}
-
-	/*
-	 * Now the secondary core is starting up let it run its
-	 * calibrations, then wait for it to finish
-	 */
-	raw_spin_unlock(&boot_lock);
+	if (smp_ops[cpu]->cpu_boot)
+		return smp_ops[cpu]->cpu_boot(cpu);
 
-	return secondary_holding_pen_release != INVALID_HWID ? -ENOSYS : 0;
+	return -EOPNOTSUPP;
 }
 
 static DECLARE_COMPLETION(cpu_running);
@@ -187,17 +143,8 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
 	preempt_disable();
 	trace_hardirqs_off();
 
-	/*
-	 * Let the primary processor know we're out of the
-	 * pen, then head off into the C entry point
-	 */
-	write_pen_release(INVALID_HWID);
-
-	/*
-	 * Synchronise with the boot thread.
-	 */
-	raw_spin_lock(&boot_lock);
-	raw_spin_unlock(&boot_lock);
+	if (smp_ops[cpu]->cpu_postboot)
+		smp_ops[cpu]->cpu_postboot();
 
 	/*
 	 * Enable local interrupts.
@@ -241,8 +188,6 @@ static const struct smp_operations *supported_smp_ops[] __initconst = {
 	NULL,
 };
 
-static const struct smp_operations *smp_ops[NR_CPUS];
-
 static const struct smp_operations * __init smp_get_ops(const char *name)
 {
 	const struct smp_operations **ops = supported_smp_ops;
diff --git a/arch/arm64/kernel/smp_psci.c b/arch/arm64/kernel/smp_psci.c
index e833930..24dbad9 100644
--- a/arch/arm64/kernel/smp_psci.c
+++ b/arch/arm64/kernel/smp_psci.c
@@ -30,24 +30,26 @@ static int __cpuinit smp_psci_cpu_init(struct device_node *dn, unsigned int cpu)
 
 static int __cpuinit smp_psci_cpu_prepare(unsigned int cpu)
 {
-	int err;
-
 	if (!psci_ops.cpu_on) {
 		pr_err("psci: no cpu_on method, not booting CPU%d\n", cpu);
 		return -ENODEV;
 	}
 
-	err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa(secondary_holding_pen));
-	if (err) {
+	return 0;
+}
+
+static int __cpuinit smp_psci_cpu_boot(unsigned int cpu)
+{
+	int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa(secondary_entry));
+	if (err)
 		pr_err("psci: failed to boot CPU%d (%d)\n", cpu, err);
-		return err;
-	}
 
-	return 0;
+	return err;
 }
 
 const struct smp_operations smp_psci_ops __cpuinitconst = {
 	.name		= "psci",
 	.cpu_init	= smp_psci_cpu_init,
 	.cpu_prepare	= smp_psci_cpu_prepare,
+	.cpu_boot	= smp_psci_cpu_boot,
 };
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 098bf64..c4c116e 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -16,13 +16,36 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/delay.h>
 #include <linux/init.h>
 #include <linux/of.h>
 #include <linux/smp.h>
 
 #include <asm/cacheflush.h>
+#include <asm/cputype.h>
+#include <asm/smp_plat.h>
+
+extern void secondary_holding_pen(void);
+volatile unsigned long secondary_holding_pen_release = INVALID_HWID;
 
 static phys_addr_t cpu_release_addr[NR_CPUS];
+static DEFINE_RAW_SPINLOCK(boot_lock);
+
+/*
+ * Write secondary_holding_pen_release in a way that is guaranteed to be
+ * visible to all observers, irrespective of whether they're taking part
+ * in coherency or not.  This is necessary for the hotplug code to work
+ * reliably.
+ */
+static void __cpuinit write_pen_release(u64 val)
+{
+	void *start = (void *)&secondary_holding_pen_release;
+	unsigned long size = sizeof(secondary_holding_pen_release);
+
+	secondary_holding_pen_release = val;
+	__flush_dcache_area(start, size);
+}
+
 
 static int __cpuinit smp_spin_table_cpu_init(struct device_node *dn, unsigned int cpu)
 {
@@ -59,8 +82,60 @@ static int __cpuinit smp_spin_table_cpu_prepare(unsigned int cpu)
 	return 0;
 }
 
+static int smp_spin_table_cpu_boot(unsigned int cpu)
+{
+	unsigned long timeout;
+
+	/*
+	 * Set synchronisation state between this boot processor
+	 * and the secondary one
+	 */
+	raw_spin_lock(&boot_lock);
+
+	/*
+	 * Update the pen release flag.
+	 */
+	write_pen_release(cpu_logical_map(cpu));
+
+	/*
+	 * Send an event, causing the secondaries to read pen_release.
+	 */
+	sev();
+
+	timeout = jiffies + (1 * HZ);
+	while (time_before(jiffies, timeout)) {
+		if (secondary_holding_pen_release == INVALID_HWID)
+			break;
+		udelay(10);
+	}
+
+	/*
+	 * Now the secondary core is starting up let it run its
+	 * calibrations, then wait for it to finish
+	 */
+	raw_spin_unlock(&boot_lock);
+
+	return secondary_holding_pen_release != INVALID_HWID ? -ENOSYS : 0;
+}
+
+void __cpuinit smp_spin_table_cpu_postboot(void)
+{
+	/*
+	 * Let the primary processor know we're out of the pen.
+	 */
+	write_pen_release(INVALID_HWID);
+
+	/*
+	 * Synchronise with the boot thread.
+	 */
+	raw_spin_lock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
+}
+
 const struct smp_operations smp_spin_table_ops __cpuinitconst = {
 	.name		= "spin-table",
 	.cpu_init	= smp_spin_table_cpu_init,
 	.cpu_prepare	= smp_spin_table_cpu_prepare,
+	.cpu_boot	= smp_spin_table_cpu_boot,
+	.cpu_postboot	= smp_spin_table_cpu_postboot,
 };
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 3fae2be..2c8a95b 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -41,7 +41,6 @@ SECTIONS
 	}
 	.text : {			/* Real text segment		*/
 		_stext = .;		/* Text and read-only data	*/
-			*(.smp.pen.text)
 			__exception_text_start = .;
 			*(.exception.text)
 			__exception_text_end = .;
-- 
1.7.9.5


* [RFC PATCH 3/5] arm64: read enable-method for CPU0
  2013-07-10 22:11 [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Mark Rutland
  2013-07-10 22:13 ` [RFC PATCH 1/5] arm64: reorganise smp_enable_ops Mark Rutland
  2013-07-10 22:15 ` [RFC PATCH 2/5] arm64: factor out spin-table boot method Mark Rutland
@ 2013-07-10 22:15 ` Mark Rutland
  2013-07-10 22:15 ` [RFC PATCH 4/5] arm64: add CPU_HOTPLUG infrastructure Mark Rutland
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2013-07-10 22:15 UTC (permalink / raw)
  To: linux-arm-kernel

With the advent of CPU_HOTPLUG, the enable-method property for CPU0
may tell us something useful (i.e. how to hotplug it back on), so we
must read it along with the enable-method properties of all the other
CPUs.

This patch ensures that CPU0's enable-method property is read.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/kernel/smp.c |   15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index e3a4fa1..15ec428 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -253,6 +253,8 @@ void __init smp_init_cpus(void)
 			}
 		}
 
+		enable_method = of_get_property(dn, "enable-method", NULL);
+
 		/*
 		 * The numbering scheme requires that the boot CPU
 		 * must be assigned logical id 0. Record it so that
@@ -268,11 +270,12 @@ void __init smp_init_cpus(void)
 
 			bootcpu_valid = true;
 
+			if (enable_method)
+				smp_ops[0] = smp_get_ops(enable_method);
+
 			/*
-			 * cpu_logical_map has already been
-			 * initialized and the boot cpu doesn't need
-			 * the enable-method so continue without
-			 * incrementing cpu.
+			 * cpu_logical_map has already been initialized so
+			 * continue without incrementing cpu.
 			 */
 			continue;
 		}
@@ -280,10 +283,6 @@ void __init smp_init_cpus(void)
 		if (cpu >= NR_CPUS)
 			goto next;
 
-		/*
-		 * We currently support only the "spin-table" enable-method.
-		 */
-		enable_method = of_get_property(dn, "enable-method", NULL);
 		if (!enable_method) {
 			pr_err("%s: missing enable-method property\n",
 				dn->full_name);
-- 
1.7.9.5


* [RFC PATCH 4/5] arm64: add CPU_HOTPLUG infrastructure
  2013-07-10 22:11 [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Mark Rutland
                   ` (2 preceding siblings ...)
  2013-07-10 22:15 ` [RFC PATCH 3/5] arm64: read enable-method for CPU0 Mark Rutland
@ 2013-07-10 22:15 ` Mark Rutland
  2013-07-10 23:59   ` Russell King - ARM Linux
  2013-07-11  9:33   ` Stephen Boyd
  2013-07-10 22:16 ` [RFC PATCH 5/5] arm64: add PSCI CPU_OFF-based hotplug support Mark Rutland
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 15+ messages in thread
From: Mark Rutland @ 2013-07-10 22:15 UTC (permalink / raw)
  To: linux-arm-kernel

This patch adds the basic infrastructure necessary to support
CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug
support will depend on an implementation's smp_operations (e.g. PSCI).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/Kconfig           |    7 +++
 arch/arm64/include/asm/irq.h |    1 +
 arch/arm64/include/asm/smp.h |   10 ++++
 arch/arm64/kernel/cputable.c |    2 +-
 arch/arm64/kernel/irq.c      |   61 +++++++++++++++++++++++
 arch/arm64/kernel/process.c  |    7 +++
 arch/arm64/kernel/smp.c      |  111 ++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 198 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 56b3f6d..3a74435 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -150,6 +150,13 @@ config NR_CPUS
 	depends on SMP
 	default "4"
 
+config HOTPLUG_CPU
+	bool "Support for hot-pluggable CPUs"
+	depends on SMP && HOTPLUG
+	help
+	  Say Y here to experiment with turning CPUs off and on.  CPUs
+	  can be controlled through /sys/devices/system/cpu.
+
 source kernel/Kconfig.preempt
 
 config HZ
diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h
index 0332fc0..e1f7ecd 100644
--- a/arch/arm64/include/asm/irq.h
+++ b/arch/arm64/include/asm/irq.h
@@ -4,6 +4,7 @@
 #include <asm-generic/irq.h>
 
 extern void (*handle_arch_irq)(struct pt_regs *);
+extern void migrate_irqs(void);
 extern void set_handle_irq(void (*handle_irq)(struct pt_regs *));
 
 #endif
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index af39644..7c55ef4 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -67,6 +67,11 @@ extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
 
 struct device_node;
 
+extern int __cpu_disable(void);
+
+extern void __cpu_die(unsigned int cpu);
+extern void cpu_die(void);
+
 struct smp_operations {
 	const char	*name;
 	/*
@@ -85,6 +90,11 @@ struct smp_operations {
 	 * Performs post-boot cleanup
 	 */
 	void		(*cpu_postboot)(void);
+#ifdef CONFIG_HOTPLUG_CPU
+	int  (*cpu_disable)(unsigned int cpu);
+	void (*cpu_die)(unsigned int cpu);
+	int  (*cpu_kill)(unsigned int cpu);
+#endif
 };
 
 extern const struct smp_operations smp_spin_table_ops;
diff --git a/arch/arm64/kernel/cputable.c b/arch/arm64/kernel/cputable.c
index 63cfc4a..847bfcf 100644
--- a/arch/arm64/kernel/cputable.c
+++ b/arch/arm64/kernel/cputable.c
@@ -22,7 +22,7 @@
 
 extern unsigned long __cpu_setup(void);
 
-struct cpu_info __initdata cpu_table[] = {
+struct cpu_info __cpuinitdata cpu_table[] = {
 	{
 		.cpu_id_val	= 0x000f0000,
 		.cpu_id_mask	= 0x000f0000,
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index ecb3354..473e5db 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -81,3 +81,64 @@ void __init init_IRQ(void)
 	if (!handle_arch_irq)
 		panic("No interrupt controller found.");
 }
+
+#ifdef CONFIG_HOTPLUG_CPU
+static bool migrate_one_irq(struct irq_desc *desc)
+{
+	struct irq_data *d = irq_desc_get_irq_data(desc);
+	const struct cpumask *affinity = d->affinity;
+	struct irq_chip *c;
+	bool ret = false;
+
+	/*
+	 * If this is a per-CPU interrupt, or the affinity does not
+	 * include this CPU, then we have nothing to do.
+	 */
+	if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
+		return false;
+
+	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
+		affinity = cpu_online_mask;
+		ret = true;
+	}
+
+	c = irq_data_get_irq_chip(d);
+	if (!c->irq_set_affinity)
+		pr_debug("IRQ%u: unable to set affinity\n", d->irq);
+	else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret)
+		cpumask_copy(d->affinity, affinity);
+
+	return ret;
+}
+
+/*
+ * The current CPU has been marked offline.  Migrate IRQs off this CPU.
+ * If the affinity settings do not allow other CPUs, force them onto any
+ * available CPU.
+ *
+ * Note: we must iterate over all IRQs, whether they have an attached
+ * action structure or not, as we need to get chained interrupts too.
+ */
+void migrate_irqs(void)
+{
+	unsigned int i;
+	struct irq_desc *desc;
+	unsigned long flags;
+
+	local_irq_save(flags);
+
+	for_each_irq_desc(i, desc) {
+		bool affinity_broken;
+
+		raw_spin_lock(&desc->lock);
+		affinity_broken = migrate_one_irq(desc);
+		raw_spin_unlock(&desc->lock);
+
+		if (affinity_broken)
+			pr_warn_ratelimited("IRQ%u no longer affine to CPU%u\n",
+					    i, smp_processor_id());
+	}
+
+	local_irq_restore(flags);
+}
+#endif /* CONFIG_HOTPLUG_CPU */
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 46f02c3..4caf198 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -102,6 +102,13 @@ void arch_cpu_idle(void)
 	local_irq_enable();
 }
 
+#ifdef CONFIG_HOTPLUG_CPU
+void arch_cpu_idle_dead(void)
+{
+	cpu_die();
+}
+#endif
+
 void machine_shutdown(void)
 {
 #ifdef CONFIG_SMP
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 15ec428..07dcb72 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -167,6 +167,117 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
 	cpu_startup_entry(CPUHP_ONLINE);
 }
 
+#ifdef CONFIG_HOTPLUG_CPU
+static int __cpuinit op_cpu_kill(unsigned int cpu)
+{
+	if (smp_ops[cpu]->cpu_kill)
+		return smp_ops[cpu]->cpu_kill(cpu);
+	return 1;
+}
+
+static int __cpuinit op_cpu_disable(unsigned int cpu)
+{
+	/* CPU0 may not have smp_ops */
+	if (!smp_ops[cpu])
+		return -EPERM;
+
+	if (smp_ops[cpu]->cpu_disable)
+		return smp_ops[cpu]->cpu_disable(cpu);
+
+	return 0;
+}
+
+/*
+ * __cpu_disable runs on the processor to be shutdown.
+ */
+int __cpuinit __cpu_disable(void)
+{
+	unsigned int cpu = smp_processor_id();
+	int ret;
+
+	ret = op_cpu_disable(cpu);
+	if (ret)
+		return ret;
+
+	/*
+	 * Take this CPU offline.  Once we clear this, we can't return,
+	 * and we must not schedule until we're ready to give up the cpu.
+	 */
+	set_cpu_online(cpu, false);
+
+	/*
+	 * OK - migrate IRQs away from this CPU
+	 */
+	migrate_irqs();
+
+	/*
+	 * Remove this CPU from the vm mask set of all processes.
+	 */
+	clear_tasks_mm_cpumask(cpu);
+
+	return 0;
+}
+
+static DECLARE_COMPLETION(cpu_died);
+
+/*
+ * called on the thread which is asking for a CPU to be shutdown -
+ * waits until shutdown has completed, or it is timed out.
+ */
+void __cpuinit __cpu_die(unsigned int cpu)
+{
+	if (!wait_for_completion_timeout(&cpu_died, msecs_to_jiffies(5000))) {
+		pr_err("CPU%u: cpu didn't die\n", cpu);
+		return;
+	}
+	pr_notice("CPU%u: shutdown\n", cpu);
+
+	if (!op_cpu_kill(cpu))
+		pr_warn("CPU%u: unable to kill\n", cpu);
+}
+
+/*
+ * Called from the idle thread for the CPU which has been shutdown.
+ *
+ * Note that we disable IRQs here, but do not re-enable them
+ * before returning to the caller. This is also the behaviour
+ * of the other hotplug-cpu capable cores, so presumably coming
+ * out of idle fixes this.
+ */
+void __ref cpu_die(void)
+{
+	unsigned int cpu = smp_processor_id();
+
+	idle_task_exit();
+
+	local_irq_disable();
+	mb();
+
+	/* Tell __cpu_die() that this CPU is now safe to dispose of */
+	RCU_NONIDLE(complete(&cpu_died));
+
+	/*
+	 * actual CPU shutdown procedure is at least platform (if not
+	 * CPU) specific.
+	 */
+	if (smp_ops[cpu]->cpu_die)
+		smp_ops[cpu]->cpu_die(cpu);
+
+	/*
+	 * Do not return to the idle loop - jump back to the secondary
+	 * cpu initialisation.  There's some initialisation which needs
+	 * to be repeated to undo the effects of taking the CPU offline.
+	 */
+	__asm__("mov	sp, %0\n"
+	"	mov	x29, #0\n"
+	"	b	secondary_start_kernel"
+		:
+		: "r" (task_stack_page(current) + THREAD_START_SP));
+}
+#endif
+
+
+
 void __init smp_cpus_done(unsigned int max_cpus)
 {
 	unsigned long bogosum = loops_per_jiffy * num_online_cpus();
-- 
1.7.9.5


* [RFC PATCH 5/5] arm64: add PSCI CPU_OFF-based hotplug support
  2013-07-10 22:11 [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Mark Rutland
                   ` (3 preceding siblings ...)
  2013-07-10 22:15 ` [RFC PATCH 4/5] arm64: add CPU_HOTPLUG infrastructure Mark Rutland
@ 2013-07-10 22:16 ` Mark Rutland
  2013-07-11 15:10 ` [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Hanjun Guo
  2013-07-15 10:47 ` Mark Rutland
  6 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2013-07-10 22:16 UTC (permalink / raw)
  To: linux-arm-kernel

This patch adds support for using PSCI CPU_OFF calls for CPU hotplug.
With this code it is possible to hot unplug CPUs with "psci" as their
enable-method, as long as there's an appropriate cpu_off function id
specified in the psci node.
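
For illustration, a psci node providing such a cpu_off function id
might look like the following sketch (the function id value is a
placeholder; the real id depends on the firmware):

---->8----
psci {
	compatible = "arm,psci";
	method = "smc";
	cpu_off = <0x84000001>;	/* placeholder; firmware-specific */
};
---->8----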

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/kernel/smp_psci.c |   30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/arm64/kernel/smp_psci.c b/arch/arm64/kernel/smp_psci.c
index 24dbad9..d536f3e 100644
--- a/arch/arm64/kernel/smp_psci.c
+++ b/arch/arm64/kernel/smp_psci.c
@@ -47,9 +47,39 @@ static int __cpuinit smp_psci_cpu_boot(unsigned int cpu)
 	return err;
 }
 
+#ifdef CONFIG_HOTPLUG_CPU
+static int __cpuinit smp_psci_cpu_disable(unsigned int cpu)
+{
+	/* Fail early if we don't have CPU_OFF support */
+	if (!psci_ops.cpu_off)
+		return -EOPNOTSUPP;
+	return 0;
+}
+
+static void __cpuinit smp_psci_cpu_die(unsigned int cpu)
+{
+	int ret;
+	/*
+	 * There are no known implementations of PSCI actually using the
+	 * power state field, pass a sensible default for now.
+	 */
+	struct psci_power_state state = {
+		.type = PSCI_POWER_STATE_TYPE_POWER_DOWN,
+	};
+
+	ret = psci_ops.cpu_off(state);
+
+	pr_warn("psci: unable to power off CPU%u (%d)\n", cpu, ret);
+}
+#endif
+
 const struct smp_operations smp_psci_ops __cpuinitconst = {
 	.name		= "psci",
 	.cpu_init	= smp_psci_cpu_init,
 	.cpu_prepare	= smp_psci_cpu_prepare,
 	.cpu_boot	= smp_psci_cpu_boot,
+#ifdef CONFIG_HOTPLUG_CPU
+	.cpu_disable	= smp_psci_cpu_disable,
+	.cpu_die	= smp_psci_cpu_die,
+#endif
 };
-- 
1.7.9.5


* [RFC PATCH 4/5] arm64: add CPU_HOTPLUG infrastructure
  2013-07-10 22:15 ` [RFC PATCH 4/5] arm64: add CPU_HOTPLUG infrastructure Mark Rutland
@ 2013-07-10 23:59   ` Russell King - ARM Linux
  2013-07-11  9:33   ` Stephen Boyd
  1 sibling, 0 replies; 15+ messages in thread
From: Russell King - ARM Linux @ 2013-07-10 23:59 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 10, 2013 at 11:15:49PM +0100, Mark Rutland wrote:
> +	/*
> +	 * actual CPU shutdown procedure is at least platform (if not
> +	 * CPU) specific.
> +	 */
> +	if (smp_ops[cpu]->cpu_die)
> +		smp_ops[cpu]->cpu_die(cpu);
> +
> +	/*
> +	 * Do not return to the idle loop - jump back to the secondary
> +	 * cpu initialisation.  There's some initialisation which needs
> +	 * to be repeated to undo the effects of taking the CPU offline.
> +	 */
> +	__asm__("mov	sp, %0\n"
> +	"	mov	x29, #0\n"
> +	"	b	secondary_start_kernel"
> +		:
> +		: "r" (task_stack_page(current) + THREAD_START_SP));

Don't make that same mistake as on AArch32 - this will result in some
people thinking its acceptable for them to return from cpu_die().  Hot
unplug is supposed to take the CPU offline so it can be restarted as if
it was never booted in the first place.

The above "hack" in AArch32 is only there because of the sillyness of
the 32-bit ARM evaluation boards not having a way to place the
secondary CPUs back into reset or power the secondary CPUs off.


* [RFC PATCH 4/5] arm64: add CPU_HOTPLUG infrastructure
  2013-07-10 22:15 ` [RFC PATCH 4/5] arm64: add CPU_HOTPLUG infrastructure Mark Rutland
  2013-07-10 23:59   ` Russell King - ARM Linux
@ 2013-07-11  9:33   ` Stephen Boyd
  2013-07-11 11:10     ` Mark Rutland
  1 sibling, 1 reply; 15+ messages in thread
From: Stephen Boyd @ 2013-07-11  9:33 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/10, Mark Rutland wrote:
> +void __ref cpu_die(void)
> +{
> +	unsigned int cpu = smp_processor_id();
> +
> +	idle_task_exit();
> +
> +	local_irq_disable();
> +	mb();
> +
> +	/* Tell __cpu_die() that this CPU is now safe to dispose of */
> +	RCU_NONIDLE(complete(&cpu_died));

This isn't true anymore.

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation


* [RFC PATCH 4/5] arm64: add CPU_HOTPLUG infrastructure
  2013-07-11  9:33   ` Stephen Boyd
@ 2013-07-11 11:10     ` Mark Rutland
  0 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2013-07-11 11:10 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 11, 2013 at 10:33:59AM +0100, Stephen Boyd wrote:
> On 07/10, Mark Rutland wrote:
> > +void __ref cpu_die(void)
> > +{
> > +	unsigned int cpu = smp_processor_id();
> > +
> > +	idle_task_exit();
> > +
> > +	local_irq_disable();
> > +	mb();
> > +
> > +	/* Tell __cpu_die() that this CPU is now safe to dispose of */
> > +	RCU_NONIDLE(complete(&cpu_died));
> 
> This isn't true anymore.

Ah. I'd missed the RCU_NONIDLE while rebasing the series. I'll fix that up.

Thanks,
Mark.


* [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support
  2013-07-10 22:11 [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Mark Rutland
                   ` (4 preceding siblings ...)
  2013-07-10 22:16 ` [RFC PATCH 5/5] arm64: add PSCI CPU_OFF-based hotplug support Mark Rutland
@ 2013-07-11 15:10 ` Hanjun Guo
  2013-07-11 16:34   ` Mark Rutland
  2013-07-15 10:47 ` Mark Rutland
  6 siblings, 1 reply; 15+ messages in thread
From: Hanjun Guo @ 2013-07-11 15:10 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Mark,

I tested this patch set on the ARMv8 Foundation model, and it panics.

It seems that we need to do something more; I'll also check out
what's going on here.

Thanks
Hanjun

dump information:
root@genericarmv8:/sys/devices/system/cpu/cpu3# echo 0 > online
CPU3: Booted secondary processor
CPU3: shutdown
BUG: failure at kernel/time/clockevents.c:284/clockevents_register_device()!
Kernel panic - not syncing: BUG!
CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.10.0+ #2
Call trace:
[<ffffffc0000873e0>] dump_backtrace+0x0/0x12c
[<ffffffc000087520>] show_stack+0x14/0x1c
[<ffffffc0003f2e7c>] dump_stack+0x20/0x28
[<ffffffc0003efd24>] panic+0xe8/0x214
[<ffffffc0000d049c>] clockevents_set_mode+0x0/0x6c
[<ffffffc0000d074c>] clockevents_config_and_register+0x24/0x30
[<ffffffc0003ef840>] arch_timer_setup+0xd8/0x140
[<ffffffc0003ef8f0>] arch_timer_cpu_notify+0x48/0xc8
[<ffffffc0000b83e4>] notifier_call_chain+0x48/0x88
[<ffffffc0000b8500>] __raw_notifier_call_chain+0xc/0x14
[<ffffffc0000973f0>] __cpu_notify+0x30/0x58
[<ffffffc00009742c>] cpu_notify+0x14/0x1c
[<ffffffc0003ed46c>] notify_cpu_starting+0x14/0x1c
[<ffffffc0003ece74>] secondary_start_kernel+0xc0/0xf4
CPU2: stopping
CPU: 2 PID: 0 Comm: swapper/2 Not tainted 3.10.0+ #2
Call trace:
[<ffffffc0000873e0>] dump_backtrace+0x0/0x12c
[<ffffffc000087520>] show_stack+0x14/0x1c
[<ffffffc0003f2e7c>] dump_stack+0x20/0x28
[<ffffffc00008da8c>] handle_IPI+0x120/0x130
[<ffffffc0000812e0>] gic_handle_irq+0x7c/0x80
Exception stack(0xffffffc87fc8de30 to 0xffffffc87fc8df50)
de20: 7fc8c000 ffffffc8 005a1c8b ffffffc0
de40: 7fc8df70 ffffffc8 00084540 ffffffc0 00000e90 00000000 00000000 00000000
de60: 3ffbb9ec ffffffc0 00010000 00000000 00000020 00000000 00000000 00000000
de80: 3ffbb7f8 ffffffc0 00000000 00000000 7fc5ddd0 ffffffc8 7fc8dd80 ffffffc8
dea0: 003fb8e4 ffffffc0 03ee8b1d 00000000 003fb8e8 ffffffc0 00000000 00000000
dec0: 1f7458e8 ffffffbc 00001000 00000000 00000001 00000000 00002000 00000000
dee0: 00000000 00000000 7fc8c000 ffffffc8 005a1c8b ffffffc0 00000001 00000000
df00: 004dc3f0 ffffffc0 005adf00 ffffffc0 003faca0 ffffffc0 8007b000 00000000
df20: 8007d000 00000000 000801a0 ffffffc0 80080188 00000000 7fc8df70 ffffffc8
df40: 0008453c ffffffc0 7fc8df70 ffffffc8
[<ffffffc0000835ac>] el1_irq+0x6c/0xc0
[<ffffffc0000ca4f4>] cpu_startup_entry+0xf0/0x138
[<ffffffc0003ece9c>] secondary_start_kernel+0xe8/0xf4
CPU1: stopping
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.10.0+ #2
Call trace:
[<ffffffc0000873e0>] dump_backtrace+0x0/0x12c
[<ffffffc000087520>] show_stack+0x14/0x1c
[<ffffffc0003f2e7c>] dump_stack+0x20/0x28
[<ffffffc00008da8c>] handle_IPI+0x120/0x130
[<ffffffc0000812e0>] gic_handle_irq+0x7c/0x80
Exception stack(0xffffffc87fc8be30 to 0xffffffc87fc8bf50)
be20: 7fc8a000 ffffffc8 005a1c8b ffffffc0
be40: 7fc8bf70 ffffffc8 00084540 ffffffc0 00002088 00000000 00000000 00000000
be60: 3ffb19ec ffffffc0 00010000 00000000 00000010 00000000 0fcf5932 0000274b
be80: aba36180 0000008d 7f7bd8b0 ffffffc8 7fc5d4d0 ffffffc8 7fc8bd80 ffffffc8
bea0: 00004d30 00000001 00000020 00000000 003fb8e8 ffffffc0 ae892f33 ffffffff
bec0: 46000000 0021a0f7 00000000 003b9aca 0016dae4 ffffffc0 94294800 0000007f
bee0: f0587350 0000007f 7fc8a000 ffffffc8 005a1c8b ffffffc0 00000001 00000000
bf00: 004dc3f0 ffffffc0 005adf00 ffffffc0 003faca0 ffffffc0 8007b000 00000000
bf20: 8007d000 00000000 000801a0 ffffffc0 80080188 00000000 7fc8bf70 ffffffc8
bf40: 0008453c ffffffc0 7fc8bf70 ffffffc8
[<ffffffc0000835ac>] el1_irq+0x6c/0xc0
[<ffffffc0000ca4f4>] cpu_startup_entry+0xf0/0x138
[<ffffffc0003ece9c>] secondary_start_kernel+0xe8/0xf4
CPU: 0 PID: 957 Comm: sh Not tainted 3.10.0+ #2
Call trace:
[<ffffffc0000873e0>] dump_backtrace+0x0/0x12c
[<ffffffc000087520>] show_stack+0x14/0x1c
[<ffffffc0003f2e7c>] dump_stack+0x20/0x28
[<ffffffc00008da8c>] handle_IPI+0x120/0x130
[<ffffffc0000812e0>] gic_handle_irq+0x7c/0x80
Exception stack(0xffffffc87e1e7a70 to 0xffffffc87e1e7b90)
7a60: 00000007 00000000 0057b000 ffffffc0
7a80: 7e1e7bb0 ffffffc8 00096044 ffffffc0 00000000 00000000 3fa41000 00000000
7aa0: 00000000 00000000 3ffa73d8 ffffffc0 00000000 00000000 00000000 00000000
7ac0: 005b1000 ffffffc0 7e1e6000 ffffffc8 00000002 00000000 0057b000 ffffffc0
7ae0: 00000001 00000000 00000030 00000000 000012b0 00000000 00000000 00000000
7b00: 000000a6 00000000 00000006 00000000 005a8000 ffffffc0 b62e09b8 0000007f
7b20: c32ba320 0000007f 00000007 00000000 0057b000 ffffffc0 00000005 00000000
7b40: 005a4758 ffffffc0 00000140 00000000 0000000e 00000000 00000000 00000000
7b60: 00000000 00000000 0000000e 00000000 00000006 00000000 7e1e7bb0 ffffffc8
7b80: 000960f8 ffffffc0 7e1e7bb0 ffffffc8
[<ffffffc0000835ac>] el1_irq+0x6c/0xc0
[<ffffffc0003efec4>] printk+0x74/0x7c
[<ffffffc0003ecf68>] __cpu_die+0x50/0x8c
[<ffffffc0003eacd4>] _cpu_down.constprop.2+0xec/0x280
[<ffffffc0003eae8c>] cpu_down+0x24/0x40
[<ffffffc0003ebca4>] store_online+0x40/0xa4
[<ffffffc0002f1bf4>] dev_attr_store+0x18/0x28
[<ffffffc00018c0ec>] sysfs_write_file+0xdc/0x154
[<ffffffc00012fefc>] vfs_write+0xac/0x1a4
[<ffffffc00013032c>] SyS_write+0x44/0x8c


On 2013-07-11 06:11, Mark Rutland wrote:
> The following patches add basic CPU_HOTPLUG support to arm64, which
> combined with appropriate firmware (e.g. [1]) can be used to power CPUs
> up and down dynamically. From discussions at connect it seemed that
> several people were interested in working in this area, so I thought I'd
> make my current implementation public now that I've managed to regain
> access to my inbox.
>
> I've tested this series with the bootwrapper PSCI implementation I've
> placed on linux-arm.org [1] and a modified foundation model dts with a
> psci node and each CPU's enable-method set to "psci", using a shell
> while repeatedly cycling all cpus off and on:
>
> for C in $(seq 0 3); do
> 	./cyclichotplug.sh $C >/dev/null 2>&1 &
> done
>
> ---->8----
> #!/bin/sh
> # cyclichotplug.sh
>
> CPU=$1;
>
> if [ -z "$CPU" ]; then
> 	printf "Usage: $0 <cpu id>\n";
> 	exit 1;
> fi
>
> ONLINEFILE=/sys/devices/system/cpu/cpu$CPU/online;
>
> while true; do
> 	echo 0 > $ONLINEFILE;
> 	echo 1 > $ONLINEFILE;
> done
> ---->8----
>
> Patches are based on v3.10.
>
> Thanks,
> Mark.
>
> [1] http://linux-arm.org/git?p=boot-wrapper-aarch64.git;a=shortlog;h=refs/tags/simple-psci
>
> Mark Rutland (5):
>    arm64: reorganise smp_enable_ops
>    arm64: factor out spin-table boot method
>    arm64: read enable-method for CPU0
>    arm64: add CPU_HOTPLUG infrastructure
>    arm64: add PSCI CPU_OFF-based hotplug support
>
>   arch/arm64/Kconfig                 |    7 ++
>   arch/arm64/include/asm/irq.h       |    1 +
>   arch/arm64/include/asm/smp.h       |   37 +++++--
>   arch/arm64/kernel/cputable.c       |    2 +-
>   arch/arm64/kernel/head.S           |   12 +-
>   arch/arm64/kernel/irq.c            |   61 ++++++++++
>   arch/arm64/kernel/process.c        |    7 ++
>   arch/arm64/kernel/smp.c            |  215 ++++++++++++++++++++++--------------
>   arch/arm64/kernel/smp_psci.c       |   54 +++++++--
>   arch/arm64/kernel/smp_spin_table.c |   85 +++++++++++++-
>   arch/arm64/kernel/vmlinux.lds.S    |    1 -
>   11 files changed, 375 insertions(+), 107 deletions(-)
>


* [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support
  2013-07-11 15:10 ` [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Hanjun Guo
@ 2013-07-11 16:34   ` Mark Rutland
  2013-07-19 11:09     ` Hanjun Guo
  0 siblings, 1 reply; 15+ messages in thread
From: Mark Rutland @ 2013-07-11 16:34 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 11, 2013 at 04:10:51PM +0100, Hanjun Guo wrote:
> Hi Mark,

Hi Hanjun,

> 
> I tested this patch set on the ARMv8 Foundation model, and it panics.
> 
> It seems that we need to do something more; I'll also check out
> what's going on here.
> 
> Thanks
> Hanjun
> 
> dump information:
> root@genericarmv8:/sys/devices/system/cpu/cpu3# echo 0 > online
> CPU3: Booted secondary processor
> CPU3: shutdown
> BUG: failure at kernel/time/clockevents.c:284/clockevents_register_device()!
> Kernel panic - not syncing: BUG!
> CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.10.0+ #2
> Call trace:
> [<ffffffc0000873e0>] dump_backtrace+0x0/0x12c
> [<ffffffc000087520>] show_stack+0x14/0x1c
> [<ffffffc0003f2e7c>] dump_stack+0x20/0x28
> [<ffffffc0003efd24>] panic+0xe8/0x214
> [<ffffffc0000d049c>] clockevents_set_mode+0x0/0x6c
> [<ffffffc0000d074c>] clockevents_config_and_register+0x24/0x30
> [<ffffffc0003ef840>] arch_timer_setup+0xd8/0x140
> [<ffffffc0003ef8f0>] arch_timer_cpu_notify+0x48/0xc8
> [<ffffffc0000b83e4>] notifier_call_chain+0x48/0x88
> [<ffffffc0000b8500>] __raw_notifier_call_chain+0xc/0x14
> [<ffffffc0000973f0>] __cpu_notify+0x30/0x58
> [<ffffffc00009742c>] cpu_notify+0x14/0x1c
> [<ffffffc0003ed46c>] notify_cpu_starting+0x14/0x1c
> [<ffffffc0003ece74>] secondary_start_kernel+0xc0/0xf4

That looks suspicious. It looks like the CPU didn't actually die and jumped
immediately to secondary_start_kernel. At a guess, did you update your dts with
a psci node and cpu enable-methods?

I can see the code is broken if you try hotplug with spin-table: cpu_disable
and cpu_die are both NULL, and the sanity checking doesn't deal with this
case, so the CPU ends up in cpu_die, calls nothing, and jumps straight back
into the kernel. I'll fix up the op_cpu_disable checks to cover this.

The other possibility is that you're using PSCI but your function id for
cpu_off is wrong, and thus the psci cpu_off call fails. Did you update your dts
for PSCI? Below is a patch to do so.

Thanks,
Mark.

---->8----
From ae35ce871b52ee006f8c5538b9be6c6829a71d6f Mon Sep 17 00:00:00 2001
From: Mark Rutland <mark.rutland@arm.com>
Date: Thu, 11 Jul 2013 17:09:56 +0100
Subject: [PATCH] HACK: arm64: dts: foundation: add PSCI data

---
 arch/arm64/boot/dts/foundation-v8.dts |   19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/boot/dts/foundation-v8.dts b/arch/arm64/boot/dts/foundation-v8.dts
index 84fcc50..335a88f 100644
--- a/arch/arm64/boot/dts/foundation-v8.dts
+++ b/arch/arm64/boot/dts/foundation-v8.dts
@@ -22,6 +22,13 @@
 		serial3 = &v2m_serial3;
 	};
 
+	psci {
+		compatible = "arm,psci";
+		method = "smc";
+		cpu_off = <0x84000001>;
+		cpu_on = <0x84000002>;
+	};
+
 	cpus {
 		#address-cells = <2>;
 		#size-cells = <0>;
@@ -30,29 +37,25 @@
 			device_type = "cpu";
 			compatible = "arm,armv8";
 			reg = <0x0 0x0>;
-			enable-method = "spin-table";
-			cpu-release-addr = <0x0 0x8000fff8>;
+			enable-method = "psci";
 		};
 		cpu@1 {
 			device_type = "cpu";
 			compatible = "arm,armv8";
 			reg = <0x0 0x1>;
-			enable-method = "spin-table";
-			cpu-release-addr = <0x0 0x8000fff8>;
+			enable-method = "psci";
 		};
 		cpu@2 {
 			device_type = "cpu";
 			compatible = "arm,armv8";
 			reg = <0x0 0x2>;
-			enable-method = "spin-table";
-			cpu-release-addr = <0x0 0x8000fff8>;
+			enable-method = "psci";
 		};
 		cpu@3 {
 			device_type = "cpu";
 			compatible = "arm,armv8";
 			reg = <0x0 0x3>;
-			enable-method = "spin-table";
-			cpu-release-addr = <0x0 0x8000fff8>;
+			enable-method = "psci";
 		};
 	};
 
-- 
1.7.9.5


* [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support
  2013-07-10 22:11 [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Mark Rutland
                   ` (5 preceding siblings ...)
  2013-07-11 15:10 ` [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Hanjun Guo
@ 2013-07-15 10:47 ` Mark Rutland
  2013-07-15 13:25   ` Paul Gortmaker
  6 siblings, 1 reply; 15+ messages in thread
From: Mark Rutland @ 2013-07-15 10:47 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Catalin, Paul,

I've realised these patches are going to conflict with Paul's "arm64:
delete __cpuinit usage from all users", so I'd like to rebase this
series atop of it (removing my cpuinit additions as I do so).

Catalin, would you be happy to create a stable branch at some point soon
with Paul's arm64 patch for me to work atop of?

Thanks,
Mark.

On Wed, Jul 10, 2013 at 11:11:14PM +0100, Mark Rutland wrote:
> The following patches add basic CPU_HOTPLUG support to arm64, which
> combined with appropriate firmware (e.g. [1]) can be used to power CPUs
> up and down dynamically. From discussions at connect it seemed that
> several people were interested in working in this area, so I thought I'd
> make my current implementation public now that I've managed to regain
> access to my inbox.
> 
> I've tested this series with the bootwrapper PSCI implementation I've
> placed on linux-arm.org [1] and a modified foundation model dts with a
> psci node and each CPU's enable-method set to "psci", using a shell
> while repeatedly cycling all cpus off and on:
> 
> for C in $(seq 0 3); do
> 	./cyclichotplug.sh $C >/dev/null 2>&1 &
> done
> 
> ---->8----
> #!/bin/sh
> # cyclichotplug.sh
> 
> CPU=$1;
> 
> if [ -z "$CPU" ]; then
> 	printf "Usage: $0 <cpu id>\n";
> 	exit 1;
> fi
> 
> ONLINEFILE=/sys/devices/system/cpu/cpu$CPU/online;
> 
> while true; do
> 	echo 0 > $ONLINEFILE;
> 	echo 1 > $ONLINEFILE;
> done
> ---->8----
> 
> Patches are based on v3.10.
> 
> Thanks,
> Mark.
> 
> [1] http://linux-arm.org/git?p=boot-wrapper-aarch64.git;a=shortlog;h=refs/tags/simple-psci
> 
> Mark Rutland (5):
>   arm64: reorganise smp_enable_ops
>   arm64: factor out spin-table boot method
>   arm64: read enable-method for CPU0
>   arm64: add CPU_HOTPLUG infrastructure
>   arm64: add PSCI CPU_OFF-based hotplug support
> 
>  arch/arm64/Kconfig                 |    7 ++
>  arch/arm64/include/asm/irq.h       |    1 +
>  arch/arm64/include/asm/smp.h       |   37 +++++--
>  arch/arm64/kernel/cputable.c       |    2 +-
>  arch/arm64/kernel/head.S           |   12 +-
>  arch/arm64/kernel/irq.c            |   61 ++++++++++
>  arch/arm64/kernel/process.c        |    7 ++
>  arch/arm64/kernel/smp.c            |  215 ++++++++++++++++++++++--------------
>  arch/arm64/kernel/smp_psci.c       |   54 +++++++--
>  arch/arm64/kernel/smp_spin_table.c |   85 +++++++++++++-
>  arch/arm64/kernel/vmlinux.lds.S    |    1 -
>  11 files changed, 375 insertions(+), 107 deletions(-)
> 
> -- 
> 1.7.9.5
> 


* [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support
  2013-07-15 10:47 ` Mark Rutland
@ 2013-07-15 13:25   ` Paul Gortmaker
  2013-07-15 13:45     ` Mark Rutland
  0 siblings, 1 reply; 15+ messages in thread
From: Paul Gortmaker @ 2013-07-15 13:25 UTC (permalink / raw)
  To: linux-arm-kernel

On 13-07-15 06:47 AM, Mark Rutland wrote:
> Hi Catalin, Paul,
> 
> I've realised these patches are going to conflict with Paul's "arm64:
> delete __cpuinit usage from all users", so I'd like to rebase this
> series atop of it (removing my cpuinit additions as I do so).
> 
> Catalin, would you be happy to create a stable branch at some point soon
> with Paul's arm64 patch for me to work atop of?

Now that 3.11-rc1 is out, I'll be sending a pull request
for phase two of the removal, including the arm64 patch,
so that should be mainline very shortly.

Paul.
--

> 
> Thanks,
> Mark.
> 
> On Wed, Jul 10, 2013 at 11:11:14PM +0100, Mark Rutland wrote:
>> The following patches add basic CPU_HOTPLUG support to arm64, which
>> combined with appropriate firmware (e.g. [1]) can be used to power CPUs
>> up and down dynamically. From discussions at connect it seemed that
>> several people were interested in working in this area, so I thought I'd
>> make my current implementation public now that I've managed to regain
>> access to my inbox.
>>
>> I've tested this series with the bootwrapper PSCI implementation I've
>> placed on linux-arm.org [1] and a modified foundation model dts with a
>> psci node and each CPU's enable-method set to "psci", using a shell
>> while repeatedly cycling all cpus off and on:
>>
>> for C in $(seq 0 3); do
>> 	./cyclichotplug.sh $C >/dev/null 2>&1 &
>> done
>>
>> ---->8----
>> #!/bin/sh
>> # cyclichotplug.sh
>>
>> CPU=$1;
>>
>> if [ -z "$CPU" ]; then
>> 	printf "Usage: $0 <cpu id>\n";
>> 	exit 1;
>> fi
>>
>> ONLINEFILE=/sys/devices/system/cpu/cpu$CPU/online;
>>
>> while true; do
>> 	echo 0 > $ONLINEFILE;
>> 	echo 1 > $ONLINEFILE;
>> done
>> ---->8----
>>
>> Patches are based on v3.10.
>>
>> Thanks,
>> Mark.
>>
>> [1] http://linux-arm.org/git?p=boot-wrapper-aarch64.git;a=shortlog;h=refs/tags/simple-psci
>>
>> Mark Rutland (5):
>>   arm64: reorganise smp_enable_ops
>>   arm64: factor out spin-table boot method
>>   arm64: read enable-method for CPU0
>>   arm64: add CPU_HOTPLUG infrastructure
>>   arm64: add PSCI CPU_OFF-based hotplug support
>>
>>  arch/arm64/Kconfig                 |    7 ++
>>  arch/arm64/include/asm/irq.h       |    1 +
>>  arch/arm64/include/asm/smp.h       |   37 +++++--
>>  arch/arm64/kernel/cputable.c       |    2 +-
>>  arch/arm64/kernel/head.S           |   12 +-
>>  arch/arm64/kernel/irq.c            |   61 ++++++++++
>>  arch/arm64/kernel/process.c        |    7 ++
>>  arch/arm64/kernel/smp.c            |  215 ++++++++++++++++++++++--------------
>>  arch/arm64/kernel/smp_psci.c       |   54 +++++++--
>>  arch/arm64/kernel/smp_spin_table.c |   85 +++++++++++++-
>>  arch/arm64/kernel/vmlinux.lds.S    |    1 -
>>  11 files changed, 375 insertions(+), 107 deletions(-)
>>
>> -- 
>> 1.7.9.5
>>


* [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support
  2013-07-15 13:25   ` Paul Gortmaker
@ 2013-07-15 13:45     ` Mark Rutland
  0 siblings, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2013-07-15 13:45 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jul 15, 2013 at 02:25:13PM +0100, Paul Gortmaker wrote:
> On 13-07-15 06:47 AM, Mark Rutland wrote:
> > Hi Catalin, Paul,
> > 
> > I've realised these patches are going to conflict with Paul's "arm64:
> > delete __cpuinit usage from all users", so I'd like to rebase this
> > series atop of it (removing my cpuinit additions as I do so).
> > 
> > Catalin, would you be happy to create a stable branch at some point soon
> > with Paul's arm64 patch for me to work atop of?
> 
> Now that 3.11-rc1 is out, I'll be sending a pull request
> for phase two of the removal, including the arm64 patch,
> so that should be mainline very shortly.

Ok, I'll wait for that to hit.

Thanks,
Mark.

> 
> Paul.
> --
> 
> > 
> > Thanks,
> > Mark.
> > 
> > On Wed, Jul 10, 2013 at 11:11:14PM +0100, Mark Rutland wrote:
> >> The following patches add basic CPU_HOTPLUG support to arm64, which
> >> combined with appropriate firmware (e.g. [1]) can be used to power CPUs
> >> up and down dynamically. From discussions at connect it seemed that
> >> several people were interested in working in this area, so I thought I'd
> >> make my current implementation public now that I've managed to regain
> >> access to my inbox.
> >>
> >> I've tested this series with the bootwrapper PSCI implementation I've
> >> placed on linux-arm.org [1] and a modified foundation model dts with a
> >> psci node and each CPU's enable-method set to "psci", using a shell
> >> while repeatedly cycling all cpus off and on:
> >>
> >> for C in $(seq 0 3); do
> >> 	./cyclichotplug.sh $C >/dev/null 2>&1 &
> >> done
> >>
> >> ---->8----
> >> #!/bin/sh
> >> # cyclichotplug.sh
> >>
> >> CPU=$1;
> >>
> >> if [ -z "$CPU" ]; then
> >> 	printf "Usage: $0 <cpu id>\n";
> >> 	exit 1;
> >> fi
> >>
> >> ONLINEFILE=/sys/devices/system/cpu/cpu$CPU/online;
> >>
> >> while true; do
> >> 	echo 0 > $ONLINEFILE;
> >> 	echo 1 > $ONLINEFILE;
> >> done
> >> ---->8----
> >>
> >> Patches are based on v3.10.
> >>
> >> Thanks,
> >> Mark.
> >>
> >> [1] http://linux-arm.org/git?p=boot-wrapper-aarch64.git;a=shortlog;h=refs/tags/simple-psci
> >>
> >> Mark Rutland (5):
> >>   arm64: reorganise smp_enable_ops
> >>   arm64: factor out spin-table boot method
> >>   arm64: read enable-method for CPU0
> >>   arm64: add CPU_HOTPLUG infrastructure
> >>   arm64: add PSCI CPU_OFF-based hotplug support
> >>
> >>  arch/arm64/Kconfig                 |    7 ++
> >>  arch/arm64/include/asm/irq.h       |    1 +
> >>  arch/arm64/include/asm/smp.h       |   37 +++++--
> >>  arch/arm64/kernel/cputable.c       |    2 +-
> >>  arch/arm64/kernel/head.S           |   12 +-
> >>  arch/arm64/kernel/irq.c            |   61 ++++++++++
> >>  arch/arm64/kernel/process.c        |    7 ++
> >>  arch/arm64/kernel/smp.c            |  215 ++++++++++++++++++++++--------------
> >>  arch/arm64/kernel/smp_psci.c       |   54 +++++++--
> >>  arch/arm64/kernel/smp_spin_table.c |   85 +++++++++++++-
> >>  arch/arm64/kernel/vmlinux.lds.S    |    1 -
> >>  11 files changed, 375 insertions(+), 107 deletions(-)
> >>
> >> -- 
> >> 1.7.9.5
> >>
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 


* [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support
  2013-07-11 16:34   ` Mark Rutland
@ 2013-07-19 11:09     ` Hanjun Guo
  0 siblings, 0 replies; 15+ messages in thread
From: Hanjun Guo @ 2013-07-19 11:09 UTC (permalink / raw)
  To: linux-arm-kernel

On 2013-7-12 0:34, Mark Rutland wrote:
> On Thu, Jul 11, 2013 at 04:10:51PM +0100, Hanjun Guo wrote:
>> Hi Mark,
>
> Hi Hanjun,
>
>>
>> I tested this patch set on the ARMv8 Foundation model, and it panics.
>>
>> It seems that we need to do something more; I'll also check out
>> what's going on here.
>>
>> Thanks
>> Hanjun
>>
>> dump information:
>> root@genericarmv8:/sys/devices/system/cpu/cpu3# echo 0 > online
>> CPU3: Booted secondary processor
>> CPU3: shutdown
>> BUG: failure at kernel/time/clockevents.c:284/clockevents_register_device()!
>> Kernel panic - not syncing: BUG!
>> CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.10.0+ #2
>> Call trace:
>> [<ffffffc0000873e0>] dump_backtrace+0x0/0x12c
>> [<ffffffc000087520>] show_stack+0x14/0x1c
>> [<ffffffc0003f2e7c>] dump_stack+0x20/0x28
>> [<ffffffc0003efd24>] panic+0xe8/0x214
>> [<ffffffc0000d049c>] clockevents_set_mode+0x0/0x6c
>> [<ffffffc0000d074c>] clockevents_config_and_register+0x24/0x30
>> [<ffffffc0003ef840>] arch_timer_setup+0xd8/0x140
>> [<ffffffc0003ef8f0>] arch_timer_cpu_notify+0x48/0xc8
>> [<ffffffc0000b83e4>] notifier_call_chain+0x48/0x88
>> [<ffffffc0000b8500>] __raw_notifier_call_chain+0xc/0x14
>> [<ffffffc0000973f0>] __cpu_notify+0x30/0x58
>> [<ffffffc00009742c>] cpu_notify+0x14/0x1c
>> [<ffffffc0003ed46c>] notify_cpu_starting+0x14/0x1c
>> [<ffffffc0003ece74>] secondary_start_kernel+0xc0/0xf4
>
> That looks suspicious. It looks like the CPU didn't actually die and jumped
> immediately to secondary_start_kernel. At a guess, did you update your dts with
> a psci node and cpu enable-methods?
>
> I can see the code is broken if you try hotplug with spin-table: cpu_disable
> and cpu_die are both NULL, and the sanity checking doesn't deal with this
> case, so the CPU ends up in cpu_die, calls nothing, and jumps straight back
> into the kernel. I'll fix up the op_cpu_disable checks to cover this.
>
> The other possibility is that you're using PSCI but your function id for
> cpu_off is wrong, and thus the psci cpu_off call fails. Did you update your dts
> for PSCI? Below is a patch to do so.

Hi Mark,

Sorry for the late reply.

I updated the boot wrapper and applied the dts patch you provided, then
tested CPU online/offline; it works fine on the ARMv8 Foundation model.
It also works fine combined with my ACPI CPU hotplug driver: I can eject
(hot remove) a CPU now :)

Thanks
Hanjun


end of thread, other threads:[~2013-07-19 11:09 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
2013-07-10 22:11 [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Mark Rutland
2013-07-10 22:13 ` [RFC PATCH 1/5] arm64: reorganise smp_enable_ops Mark Rutland
2013-07-10 22:15 ` [RFC PATCH 2/5] arm64: factor out spin-table boot method Mark Rutland
2013-07-10 22:15 ` [RFC PATCH 3/5] arm64: read enable-method for CPU0 Mark Rutland
2013-07-10 22:15 ` [RFC PATCH 4/5] arm64: add CPU_HOTPLUG infrastructure Mark Rutland
2013-07-10 23:59   ` Russell King - ARM Linux
2013-07-11  9:33   ` Stephen Boyd
2013-07-11 11:10     ` Mark Rutland
2013-07-10 22:16 ` [RFC PATCH 5/5] arm64: add PSCI CPU_OFF-based hotplug support Mark Rutland
2013-07-11 15:10 ` [RFC PATCH 0/5] arm64: initial CPU_HOTPLUG support Hanjun Guo
2013-07-11 16:34   ` Mark Rutland
2013-07-19 11:09     ` Hanjun Guo
2013-07-15 10:47 ` Mark Rutland
2013-07-15 13:25   ` Paul Gortmaker
2013-07-15 13:45     ` Mark Rutland
