linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/3 REPOST] arm: convert a few spinlock_t locks to raw_spinlock_t
@ 2018-12-07 10:27 Sebastian Andrzej Siewior
  2018-12-07 10:27 ` [PATCH 1/3] arm: Convert arm boot_lock to raw Sebastian Andrzej Siewior
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-07 10:27 UTC (permalink / raw)
  To: linux-arm-kernel, Arnd Bergmann; +Cc: tglx, Russell King

Hi,

this small series converts a few spinlock_t locks to raw_spinlock_t.
This change makes no difference for !RT but is required for RT.
I added a few acks to first patch since the last repost [0]. Arnd
suggested that it might be best to route patch 1 via the arm-soc tree.
I hope that we can route patch 2 and 3 the same way.

[0] https://lkml.kernel.org/r/20180711110037.12928-1-bigeasy@linutronix.de

Sebastian



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-07 10:27 [PATCH 0/3 REPOST] arm: convert a few spinlock_t locks to raw_spinlock_t Sebastian Andrzej Siewior
@ 2018-12-07 10:27 ` Sebastian Andrzej Siewior
  2018-12-07 12:49   ` Linus Walleij
  2018-12-07 13:00   ` Russell King - ARM Linux
  2018-12-07 10:27 ` [PATCH 2/3] arm: kprobe: make patch_lock a raw_spinlock_t Sebastian Andrzej Siewior
  2018-12-07 10:27 ` [PATCH 3/3] arm: use a raw_spinlock_t in unwind Sebastian Andrzej Siewior
  2 siblings, 2 replies; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-07 10:27 UTC (permalink / raw)
  To: linux-arm-kernel, Arnd Bergmann
  Cc: Tony Lindgren, Viresh Kumar, Linus Walleij, David Brown, Wei Xu,
	Manivannan Sadhasivam, linux-samsung-soc, Viresh Kumar,
	Russell King, Krzysztof Kozlowski, Maxime Ripard, Chen-Yu Tsai,
	Kukjin Kim, Andy Gross, Sebastian Andrzej Siewior, linux-arm-msm,
	tglx, linux-omap, Barry Song, Frank Rowand, Patrice Chotard,
	Shiraz Hashim, Andreas Färber

From: Frank Rowand <frank.rowand@am.sony.com>

The arm boot_lock is used by the secondary processor startup code.  The locking
task is the idle thread, which has idle->sched_class == &idle_sched_class.
idle_sched_class->enqueue_task == NULL, so if the idle task blocks on the
lock, the attempt to wake it when the lock becomes available will fail:

try_to_wake_up()
   ...
      activate_task()
         enqueue_task()
            p->sched_class->enqueue_task(rq, p, flags)

Fix by converting boot_lock to a raw spin lock.

Cc: Andreas Färber <afaerber@suse.de>
Cc: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Kukjin Kim <kgene@kernel.org>
Cc: Krzysztof Kozlowski <krzk@kernel.org>
Cc: Wei Xu <xuwei5@hisilicon.com>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Andy Gross <andy.gross@linaro.org>
Cc: David Brown <david.brown@linaro.org>
Cc: Viresh Kumar <vireshk@kernel.org>
Cc: Shiraz Hashim <shiraz.linux.kernel@gmail.com>
Cc: Patrice Chotard <patrice.chotard@st.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Chen-Yu Tsai <wens@csie.org>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-samsung-soc@vger.kernel.org
Cc: linux-omap@vger.kernel.org
Cc: linux-arm-msm@vger.kernel.org
Signed-off-by: Frank Rowand <frank.rowand@am.sony.com>
Link: http://lkml.kernel.org/r/4E77B952.3010606@am.sony.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Krzysztof Kozlowski <krzk@kernel.org>
Tested-by: Krzysztof Kozlowski <krzk@kernel.org> (Exynos5422 Linaro PM-QA)
Acked-by: Andreas Färber <afaerber@suse.de> (for mach-actions)
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Patrice Chotard <patrice.chotard@st.com> (for mach-sti)
Acked-by: Wei Xu <xuwei5@hisilicon.com> (for mach-hisi)
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/arm/mach-actions/platsmp.c   |  6 +++---
 arch/arm/mach-exynos/platsmp.c    | 12 ++++++------
 arch/arm/mach-hisi/platmcpm.c     | 22 +++++++++++-----------
 arch/arm/mach-omap2/omap-smp.c    | 10 +++++-----
 arch/arm/mach-prima2/platsmp.c    | 10 +++++-----
 arch/arm/mach-qcom/platsmp.c      | 10 +++++-----
 arch/arm/mach-spear/platsmp.c     | 10 +++++-----
 arch/arm/mach-sti/platsmp.c       | 10 +++++-----
 arch/arm/mach-sunxi/mc_smp.c      | 20 ++++++++++----------
 arch/arm/plat-versatile/platsmp.c | 10 +++++-----
 10 files changed, 60 insertions(+), 60 deletions(-)

diff --git a/arch/arm/mach-actions/platsmp.c b/arch/arm/mach-actions/platsmp.c
index 3efaa10efc439..770079245d273 100644
--- a/arch/arm/mach-actions/platsmp.c
+++ b/arch/arm/mach-actions/platsmp.c
@@ -39,7 +39,7 @@ static void __iomem *sps_base_addr;
 static void __iomem *timer_base_addr;
 static int ncores;
 
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 
 void owl_secondary_startup(void);
 
@@ -93,7 +93,7 @@ static int s500_smp_boot_secondary(unsigned int cpu, struct task_struct *idle)
 
 	udelay(10);
 
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 
 	smp_send_reschedule(cpu);
 
@@ -106,7 +106,7 @@ static int s500_smp_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	writel(0, timer_base_addr + OWL_CPU1_ADDR + (cpu - 1) * 4);
 	writel(0, timer_base_addr + OWL_CPU1_FLAG + (cpu - 1) * 4);
 
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 
 	return 0;
 }
diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c
index 6a1e682371b32..17dca0ff336e0 100644
--- a/arch/arm/mach-exynos/platsmp.c
+++ b/arch/arm/mach-exynos/platsmp.c
@@ -239,7 +239,7 @@ static void write_pen_release(int val)
 	sync_cache_w(&pen_release);
 }
 
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 
 static void exynos_secondary_init(unsigned int cpu)
 {
@@ -252,8 +252,8 @@ static void exynos_secondary_init(unsigned int cpu)
 	/*
 	 * Synchronise with the boot thread.
 	 */
-	spin_lock(&boot_lock);
-	spin_unlock(&boot_lock);
+	raw_spin_lock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 }
 
 int exynos_set_boot_addr(u32 core_id, unsigned long boot_addr)
@@ -317,7 +317,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * Set synchronisation state between this boot processor
 	 * and the secondary one
 	 */
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 
 	/*
 	 * The secondary processor is waiting to be released from
@@ -344,7 +344,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
 
 		if (timeout == 0) {
 			printk(KERN_ERR "cpu1 power enable failed");
-			spin_unlock(&boot_lock);
+			raw_spin_unlock(&boot_lock);
 			return -ETIMEDOUT;
 		}
 	}
@@ -390,7 +390,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * calibrations, then wait for it to finish
 	 */
 fail:
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 
 	return pen_release != -1 ? ret : 0;
 }
diff --git a/arch/arm/mach-hisi/platmcpm.c b/arch/arm/mach-hisi/platmcpm.c
index f66815c3dd07e..00524abd963f7 100644
--- a/arch/arm/mach-hisi/platmcpm.c
+++ b/arch/arm/mach-hisi/platmcpm.c
@@ -61,7 +61,7 @@
 
 static void __iomem *sysctrl, *fabric;
 static int hip04_cpu_table[HIP04_MAX_CLUSTERS][HIP04_MAX_CPUS_PER_CLUSTER];
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 static u32 fabric_phys_addr;
 /*
  * [0]: bootwrapper physical address
@@ -113,7 +113,7 @@ static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle)
 	if (cluster >= HIP04_MAX_CLUSTERS || cpu >= HIP04_MAX_CPUS_PER_CLUSTER)
 		return -EINVAL;
 
-	spin_lock_irq(&boot_lock);
+	raw_spin_lock_irq(&boot_lock);
 
 	if (hip04_cpu_table[cluster][cpu])
 		goto out;
@@ -147,7 +147,7 @@ static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle)
 
 out:
 	hip04_cpu_table[cluster][cpu]++;
-	spin_unlock_irq(&boot_lock);
+	raw_spin_unlock_irq(&boot_lock);
 
 	return 0;
 }
@@ -162,11 +162,11 @@ static void hip04_cpu_die(unsigned int l_cpu)
 	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
 	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
 
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 	hip04_cpu_table[cluster][cpu]--;
 	if (hip04_cpu_table[cluster][cpu] == 1) {
 		/* A power_up request went ahead of us. */
-		spin_unlock(&boot_lock);
+		raw_spin_unlock(&boot_lock);
 		return;
 	} else if (hip04_cpu_table[cluster][cpu] > 1) {
 		pr_err("Cluster %d CPU%d boots multiple times\n", cluster, cpu);
@@ -174,7 +174,7 @@ static void hip04_cpu_die(unsigned int l_cpu)
 	}
 
 	last_man = hip04_cluster_is_down(cluster);
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 	if (last_man) {
 		/* Since it's Cortex A15, disable L2 prefetching. */
 		asm volatile(
@@ -203,7 +203,7 @@ static int hip04_cpu_kill(unsigned int l_cpu)
 	       cpu >= HIP04_MAX_CPUS_PER_CLUSTER);
 
 	count = TIMEOUT_MSEC / POLL_MSEC;
-	spin_lock_irq(&boot_lock);
+	raw_spin_lock_irq(&boot_lock);
 	for (tries = 0; tries < count; tries++) {
 		if (hip04_cpu_table[cluster][cpu])
 			goto err;
@@ -211,10 +211,10 @@ static int hip04_cpu_kill(unsigned int l_cpu)
 		data = readl_relaxed(sysctrl + SC_CPU_RESET_STATUS(cluster));
 		if (data & CORE_WFI_STATUS(cpu))
 			break;
-		spin_unlock_irq(&boot_lock);
+		raw_spin_unlock_irq(&boot_lock);
 		/* Wait for clean L2 when the whole cluster is down. */
 		msleep(POLL_MSEC);
-		spin_lock_irq(&boot_lock);
+		raw_spin_lock_irq(&boot_lock);
 	}
 	if (tries >= count)
 		goto err;
@@ -231,10 +231,10 @@ static int hip04_cpu_kill(unsigned int l_cpu)
 		goto err;
 	if (hip04_cluster_is_down(cluster))
 		hip04_set_snoop_filter(cluster, 0);
-	spin_unlock_irq(&boot_lock);
+	raw_spin_unlock_irq(&boot_lock);
 	return 1;
 err:
-	spin_unlock_irq(&boot_lock);
+	raw_spin_unlock_irq(&boot_lock);
 	return 0;
 }
 #endif
diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c
index 1c73694c871ad..ac4d2f030b874 100644
--- a/arch/arm/mach-omap2/omap-smp.c
+++ b/arch/arm/mach-omap2/omap-smp.c
@@ -69,7 +69,7 @@ static const struct omap_smp_config omap5_cfg __initconst = {
 	.startup_addr = omap5_secondary_startup,
 };
 
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 
 void __iomem *omap4_get_scu_base(void)
 {
@@ -177,8 +177,8 @@ static void omap4_secondary_init(unsigned int cpu)
 	/*
 	 * Synchronise with the boot thread.
 	 */
-	spin_lock(&boot_lock);
-	spin_unlock(&boot_lock);
+	raw_spin_lock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 }
 
 static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
@@ -191,7 +191,7 @@ static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * Set synchronisation state between this boot processor
 	 * and the secondary one
 	 */
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 
 	/*
 	 * Update the AuxCoreBoot0 with boot state for secondary core.
@@ -270,7 +270,7 @@ static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * Now the secondary core is starting up let it run its
 	 * calibrations, then wait for it to finish
 	 */
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 
 	return 0;
 }
diff --git a/arch/arm/mach-prima2/platsmp.c b/arch/arm/mach-prima2/platsmp.c
index 75ef5d4be554c..c17c86e5d860a 100644
--- a/arch/arm/mach-prima2/platsmp.c
+++ b/arch/arm/mach-prima2/platsmp.c
@@ -22,7 +22,7 @@
 
 static void __iomem *clk_base;
 
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 
 static void sirfsoc_secondary_init(unsigned int cpu)
 {
@@ -36,8 +36,8 @@ static void sirfsoc_secondary_init(unsigned int cpu)
 	/*
 	 * Synchronise with the boot thread.
 	 */
-	spin_lock(&boot_lock);
-	spin_unlock(&boot_lock);
+	raw_spin_lock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 }
 
 static const struct of_device_id clk_ids[]  = {
@@ -75,7 +75,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	/* make sure write buffer is drained */
 	mb();
 
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 
 	/*
 	 * The secondary processor is waiting to be released from
@@ -107,7 +107,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * now the secondary core is starting up let it run its
 	 * calibrations, then wait for it to finish
 	 */
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 
 	return pen_release != -1 ? -ENOSYS : 0;
 }
diff --git a/arch/arm/mach-qcom/platsmp.c b/arch/arm/mach-qcom/platsmp.c
index 5494c9e0c909b..e8ce157d3548a 100644
--- a/arch/arm/mach-qcom/platsmp.c
+++ b/arch/arm/mach-qcom/platsmp.c
@@ -46,7 +46,7 @@
 
 extern void secondary_startup_arm(void);
 
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 
 #ifdef CONFIG_HOTPLUG_CPU
 static void qcom_cpu_die(unsigned int cpu)
@@ -60,8 +60,8 @@ static void qcom_secondary_init(unsigned int cpu)
 	/*
 	 * Synchronise with the boot thread.
 	 */
-	spin_lock(&boot_lock);
-	spin_unlock(&boot_lock);
+	raw_spin_lock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 }
 
 static int scss_release_secondary(unsigned int cpu)
@@ -284,7 +284,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int))
 	 * set synchronisation state between this boot processor
 	 * and the secondary one
 	 */
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 
 	/*
 	 * Send the secondary CPU a soft interrupt, thereby causing
@@ -297,7 +297,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int))
 	 * now the secondary core is starting up let it run its
 	 * calibrations, then wait for it to finish
 	 */
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 
 	return ret;
 }
diff --git a/arch/arm/mach-spear/platsmp.c b/arch/arm/mach-spear/platsmp.c
index 39038a03836ac..6da5c93872bf8 100644
--- a/arch/arm/mach-spear/platsmp.c
+++ b/arch/arm/mach-spear/platsmp.c
@@ -32,7 +32,7 @@ static void write_pen_release(int val)
 	sync_cache_w(&pen_release);
 }
 
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 
 static void __iomem *scu_base = IOMEM(VA_SCU_BASE);
 
@@ -47,8 +47,8 @@ static void spear13xx_secondary_init(unsigned int cpu)
 	/*
 	 * Synchronise with the boot thread.
 	 */
-	spin_lock(&boot_lock);
-	spin_unlock(&boot_lock);
+	raw_spin_lock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 }
 
 static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
@@ -59,7 +59,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * set synchronisation state between this boot processor
 	 * and the secondary one
 	 */
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 
 	/*
 	 * The secondary processor is waiting to be released from
@@ -84,7 +84,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * now the secondary core is starting up let it run its
 	 * calibrations, then wait for it to finish
 	 */
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 
 	return pen_release != -1 ? -ENOSYS : 0;
 }
diff --git a/arch/arm/mach-sti/platsmp.c b/arch/arm/mach-sti/platsmp.c
index 231f19e174365..a3419b7003e6f 100644
--- a/arch/arm/mach-sti/platsmp.c
+++ b/arch/arm/mach-sti/platsmp.c
@@ -35,7 +35,7 @@ static void write_pen_release(int val)
 	sync_cache_w(&pen_release);
 }
 
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 
 static void sti_secondary_init(unsigned int cpu)
 {
@@ -48,8 +48,8 @@ static void sti_secondary_init(unsigned int cpu)
 	/*
 	 * Synchronise with the boot thread.
 	 */
-	spin_lock(&boot_lock);
-	spin_unlock(&boot_lock);
+	raw_spin_lock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 }
 
 static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
@@ -60,7 +60,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * set synchronisation state between this boot processor
 	 * and the secondary one
 	 */
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 
 	/*
 	 * The secondary processor is waiting to be released from
@@ -91,7 +91,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * now the secondary core is starting up let it run its
 	 * calibrations, then wait for it to finish
 	 */
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 
 	return pen_release != -1 ? -ENOSYS : 0;
 }
diff --git a/arch/arm/mach-sunxi/mc_smp.c b/arch/arm/mach-sunxi/mc_smp.c
index b4037b603897d..8a6c872f52b59 100644
--- a/arch/arm/mach-sunxi/mc_smp.c
+++ b/arch/arm/mach-sunxi/mc_smp.c
@@ -367,7 +367,7 @@ static void sunxi_cluster_cache_disable_without_axi(void)
 static int sunxi_mc_smp_cpu_table[SUNXI_NR_CLUSTERS][SUNXI_CPUS_PER_CLUSTER];
 int sunxi_mc_smp_first_comer;
 
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 
 static bool sunxi_mc_smp_cluster_is_down(unsigned int cluster)
 {
@@ -399,7 +399,7 @@ static int sunxi_mc_smp_boot_secondary(unsigned int l_cpu, struct task_struct *i
 	if (cluster >= SUNXI_NR_CLUSTERS || cpu >= SUNXI_CPUS_PER_CLUSTER)
 		return -EINVAL;
 
-	spin_lock_irq(&boot_lock);
+	raw_spin_lock_irq(&boot_lock);
 
 	if (sunxi_mc_smp_cpu_table[cluster][cpu])
 		goto out;
@@ -417,7 +417,7 @@ static int sunxi_mc_smp_boot_secondary(unsigned int l_cpu, struct task_struct *i
 
 out:
 	sunxi_mc_smp_cpu_table[cluster][cpu]++;
-	spin_unlock_irq(&boot_lock);
+	raw_spin_unlock_irq(&boot_lock);
 
 	return 0;
 }
@@ -448,13 +448,13 @@ static void sunxi_mc_smp_cpu_die(unsigned int l_cpu)
 	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
 	pr_debug("%s: cluster %u cpu %u\n", __func__, cluster, cpu);
 
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 	sunxi_mc_smp_cpu_table[cluster][cpu]--;
 	if (sunxi_mc_smp_cpu_table[cluster][cpu] == 1) {
 		/* A power_up request went ahead of us. */
 		pr_debug("%s: aborting due to a power up request\n",
 			 __func__);
-		spin_unlock(&boot_lock);
+		raw_spin_unlock(&boot_lock);
 		return;
 	} else if (sunxi_mc_smp_cpu_table[cluster][cpu] > 1) {
 		pr_err("Cluster %d CPU%d boots multiple times\n",
@@ -463,7 +463,7 @@ static void sunxi_mc_smp_cpu_die(unsigned int l_cpu)
 	}
 
 	last_man = sunxi_mc_smp_cluster_is_down(cluster);
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 
 	gic_cpu_if_down(0);
 	if (last_man)
@@ -542,11 +542,11 @@ static int sunxi_mc_smp_cpu_kill(unsigned int l_cpu)
 
 	/* wait for CPU core to die and enter WFI */
 	count = TIMEOUT_USEC / POLL_USEC;
-	spin_lock_irq(&boot_lock);
+	raw_spin_lock_irq(&boot_lock);
 	for (tries = 0; tries < count; tries++) {
-		spin_unlock_irq(&boot_lock);
+		raw_spin_unlock_irq(&boot_lock);
 		usleep_range(POLL_USEC / 2, POLL_USEC);
-		spin_lock_irq(&boot_lock);
+		raw_spin_lock_irq(&boot_lock);
 
 		/*
 		 * If the user turns off a bunch of cores at the same
@@ -593,7 +593,7 @@ static int sunxi_mc_smp_cpu_kill(unsigned int l_cpu)
 	sunxi_cluster_powerdown(cluster);
 
 out:
-	spin_unlock_irq(&boot_lock);
+	raw_spin_unlock_irq(&boot_lock);
 	pr_debug("%s: cluster %u cpu %u powerdown: %d\n",
 		 __func__, cluster, cpu, ret);
 	return !ret;
diff --git a/arch/arm/plat-versatile/platsmp.c b/arch/arm/plat-versatile/platsmp.c
index c2366510187a8..6b60f582b738c 100644
--- a/arch/arm/plat-versatile/platsmp.c
+++ b/arch/arm/plat-versatile/platsmp.c
@@ -32,7 +32,7 @@ static void write_pen_release(int val)
 	sync_cache_w(&pen_release);
 }
 
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 
 void versatile_secondary_init(unsigned int cpu)
 {
@@ -45,8 +45,8 @@ void versatile_secondary_init(unsigned int cpu)
 	/*
 	 * Synchronise with the boot thread.
 	 */
-	spin_lock(&boot_lock);
-	spin_unlock(&boot_lock);
+	raw_spin_lock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 }
 
 int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
@@ -57,7 +57,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * Set synchronisation state between this boot processor
 	 * and the secondary one
 	 */
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 
 	/*
 	 * This is really belt and braces; we hold unintended secondary
@@ -87,7 +87,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * now the secondary core is starting up let it run its
 	 * calibrations, then wait for it to finish
 	 */
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 
 	return pen_release != -1 ? -ENOSYS : 0;
 }
-- 
2.20.0.rc2



* [PATCH 2/3] arm: kprobe: make patch_lock a raw_spinlock_t
  2018-12-07 10:27 [PATCH 0/3 REPOST] arm: convert a few spinlock_t locks to raw_spinlock_t Sebastian Andrzej Siewior
  2018-12-07 10:27 ` [PATCH 1/3] arm: Convert arm boot_lock to raw Sebastian Andrzej Siewior
@ 2018-12-07 10:27 ` Sebastian Andrzej Siewior
  2019-02-13  8:46   ` Sebastian Andrzej Siewior
  2018-12-07 10:27 ` [PATCH 3/3] arm: use a raw_spinlock_t in unwind Sebastian Andrzej Siewior
  2 siblings, 1 reply; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-07 10:27 UTC (permalink / raw)
  To: linux-arm-kernel, Arnd Bergmann
  Cc: Yang Shi, tglx, Russell King, Sebastian Andrzej Siewior

From: Yang Shi <yang.shi@linaro.org>

When running kprobe on -rt kernel, the below bug is caught:

|BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:931
|in_atomic(): 1, irqs_disabled(): 128, pid: 14, name: migration/0
|Preemption disabled at:[<802f2b98>] cpu_stopper_thread+0xc0/0x140
|CPU: 0 PID: 14 Comm: migration/0 Tainted: G O 4.8.3-rt2 #1
|Hardware name: Freescale LS1021A
|[<8025a43c>] (___might_sleep)
|[<80b5b324>] (rt_spin_lock)
|[<80b5c31c>] (__patch_text_real)
|[<80b5c3ac>] (patch_text_stop_machine)
|[<802f2920>] (multi_cpu_stop)

Since patch_text_stop_machine() is called from stop_machine(), which
disables interrupts, a sleeping lock must not be used in this atomic
context, so convert patch_lock to a raw_spinlock_t.

Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/arm/kernel/patch.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm/kernel/patch.c b/arch/arm/kernel/patch.c
index a50dc00d79a27..d0a05a3bdb965 100644
--- a/arch/arm/kernel/patch.c
+++ b/arch/arm/kernel/patch.c
@@ -16,7 +16,7 @@ struct patch {
 	unsigned int insn;
 };
 
-static DEFINE_SPINLOCK(patch_lock);
+static DEFINE_RAW_SPINLOCK(patch_lock);
 
 static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags)
 	__acquires(&patch_lock)
@@ -33,7 +33,7 @@ static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags)
 		return addr;
 
 	if (flags)
-		spin_lock_irqsave(&patch_lock, *flags);
+		raw_spin_lock_irqsave(&patch_lock, *flags);
 	else
 		__acquire(&patch_lock);
 
@@ -48,7 +48,7 @@ static void __kprobes patch_unmap(int fixmap, unsigned long *flags)
 	clear_fixmap(fixmap);
 
 	if (flags)
-		spin_unlock_irqrestore(&patch_lock, *flags);
+		raw_spin_unlock_irqrestore(&patch_lock, *flags);
 	else
 		__release(&patch_lock);
 }
-- 
2.20.0.rc2



* [PATCH 3/3] arm: use a raw_spinlock_t in unwind
  2018-12-07 10:27 [PATCH 0/3 REPOST] arm: convert a few spinlock_t locks to raw_spinlock_t Sebastian Andrzej Siewior
  2018-12-07 10:27 ` [PATCH 1/3] arm: Convert arm boot_lock to raw Sebastian Andrzej Siewior
  2018-12-07 10:27 ` [PATCH 2/3] arm: kprobe: make patch_lock a raw_spinlock_t Sebastian Andrzej Siewior
@ 2018-12-07 10:27 ` Sebastian Andrzej Siewior
  2019-02-13  8:46   ` Sebastian Andrzej Siewior
  2 siblings, 1 reply; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-07 10:27 UTC (permalink / raw)
  To: linux-arm-kernel, Arnd Bergmann
  Cc: tglx, Russell King, Sebastian Andrzej Siewior

Unwinding is mostly done with interrupts enabled; however, SLUB may call
it with interrupts disabled while creating a new slab cache.

I observed a system freeze while loading a module that called
kmem_cache_create() during init. That means SLUB's __slab_alloc()
disabled interrupts and then called:

->new_slab_objects()
 ->new_slab()
  ->setup_object()
   ->setup_object_debug()
    ->init_tracking()
     ->set_track()
      ->save_stack_trace()
       ->save_stack_trace_tsk()
        ->walk_stackframe()
         ->unwind_frame()
          ->unwind_find_idx()
           =>spin_lock_irqsave(&unwind_lock);

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/arm/kernel/unwind.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index 0bee233fef9a3..314cfb232a635 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -93,7 +93,7 @@ extern const struct unwind_idx __start_unwind_idx[];
 static const struct unwind_idx *__origin_unwind_idx;
 extern const struct unwind_idx __stop_unwind_idx[];
 
-static DEFINE_SPINLOCK(unwind_lock);
+static DEFINE_RAW_SPINLOCK(unwind_lock);
 static LIST_HEAD(unwind_tables);
 
 /* Convert a prel31 symbol to an absolute address */
@@ -201,7 +201,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
 		/* module unwind tables */
 		struct unwind_table *table;
 
-		spin_lock_irqsave(&unwind_lock, flags);
+		raw_spin_lock_irqsave(&unwind_lock, flags);
 		list_for_each_entry(table, &unwind_tables, list) {
 			if (addr >= table->begin_addr &&
 			    addr < table->end_addr) {
@@ -213,7 +213,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
 				break;
 			}
 		}
-		spin_unlock_irqrestore(&unwind_lock, flags);
+		raw_spin_unlock_irqrestore(&unwind_lock, flags);
 	}
 
 	pr_debug("%s: idx = %p\n", __func__, idx);
@@ -529,9 +529,9 @@ struct unwind_table *unwind_table_add(unsigned long start, unsigned long size,
 	tab->begin_addr = text_addr;
 	tab->end_addr = text_addr + text_size;
 
-	spin_lock_irqsave(&unwind_lock, flags);
+	raw_spin_lock_irqsave(&unwind_lock, flags);
 	list_add_tail(&tab->list, &unwind_tables);
-	spin_unlock_irqrestore(&unwind_lock, flags);
+	raw_spin_unlock_irqrestore(&unwind_lock, flags);
 
 	return tab;
 }
@@ -543,9 +543,9 @@ void unwind_table_del(struct unwind_table *tab)
 	if (!tab)
 		return;
 
-	spin_lock_irqsave(&unwind_lock, flags);
+	raw_spin_lock_irqsave(&unwind_lock, flags);
 	list_del(&tab->list);
-	spin_unlock_irqrestore(&unwind_lock, flags);
+	raw_spin_unlock_irqrestore(&unwind_lock, flags);
 
 	kfree(tab);
 }
-- 
2.20.0.rc2



* Re: [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-07 10:27 ` [PATCH 1/3] arm: Convert arm boot_lock to raw Sebastian Andrzej Siewior
@ 2018-12-07 12:49   ` Linus Walleij
  2018-12-07 13:00   ` Russell King - ARM Linux
  1 sibling, 0 replies; 19+ messages in thread
From: Linus Walleij @ 2018-12-07 12:49 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: ext Tony Lindgren, viresh kumar, David Brown, Xu Wei,
	Manivannan Sadhasivam, linux-samsung-soc, Viresh Kumar,
	Russell King, Krzysztof Kozlowski, Maxime Ripard, Chen-Yu Tsai,
	Kukjin Kim, Andy Gross, Arnd Bergmann, linux-arm-msm,
	Thomas Gleixner, Linux-OMAP, Linux ARM, Barry Song, Frank Rowand,
	Patrice CHOTARD, shiraz hashim, Andreas Färber

On Fri, Dec 7, 2018 at 11:28 AM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:

> From: Frank Rowand <frank.rowand@am.sony.com>
>
> The arm boot_lock is used by the secondary processor startup code.  The locking
> task is the idle thread, which has idle->sched_class == &idle_sched_class.
> idle_sched_class->enqueue_task == NULL, so if the idle task blocks on the
> lock, the attempt to wake it when the lock becomes available will fail:
>
> try_to_wake_up()
>    ...
>       activate_task()
>          enqueue_task()
>             p->sched_class->enqueue_task(rq, p, flags)
>
> Fix by converting boot_lock to a raw spin lock.

This makes perfect sense.

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-07 10:27 ` [PATCH 1/3] arm: Convert arm boot_lock to raw Sebastian Andrzej Siewior
  2018-12-07 12:49   ` Linus Walleij
@ 2018-12-07 13:00   ` Russell King - ARM Linux
  2018-12-08 23:15     ` Linus Walleij
  1 sibling, 1 reply; 19+ messages in thread
From: Russell King - ARM Linux @ 2018-12-07 13:00 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Tony Lindgren, Viresh Kumar, Linus Walleij, David Brown, Wei Xu,
	Manivannan Sadhasivam, linux-samsung-soc, Viresh Kumar,
	Krzysztof Kozlowski, Maxime Ripard, Chen-Yu Tsai, Kukjin Kim,
	Andy Gross, Arnd Bergmann, linux-arm-msm, tglx, linux-omap,
	linux-arm-kernel, Barry Song, Frank Rowand, Patrice Chotard,
	Shiraz Hashim, Andreas Färber

On Fri, Dec 07, 2018 at 11:27:47AM +0100, Sebastian Andrzej Siewior wrote:
> From: Frank Rowand <frank.rowand@am.sony.com>
> 
> The arm boot_lock is used by the secondary processor startup code.  The locking
> task is the idle thread, which has idle->sched_class == &idle_sched_class.
> idle_sched_class->enqueue_task == NULL, so if the idle task blocks on the
> lock, the attempt to wake it when the lock becomes available will fail:
> 
> try_to_wake_up()
>    ...
>       activate_task()
>          enqueue_task()
>             p->sched_class->enqueue_task(rq, p, flags)
> 
> Fix by converting boot_lock to a raw spin lock.

I think the bigger question is why are all these platforms using this.
For the ARM development platforms, it's fair as they have no way to
power down the CPUs or reset the other CPUs, and the "boot lock" was
there originally to prevent the delay calibration going awry.  (I've
been saying this for years but obviously people like to stuff their
fingers in their ears.)

It annoys me when people cargo-cult copy code and don't bother trying
to understand it, which is shown clearly in your diffstat.

However, modern platforms do have ways to control the other CPUs, so
the boot lock stuff should not be necessary.

Also note that CPU hotplug and CPU unplug is all synchronised in the
core CPU hotplug code, so spinlocks within the arch specific code to
allegedly serialise what they're doing between CPU death and CPU
bringup are completely redundant.

Rather than trying to fix this up, can we instead try to eliminate as
much of the copying of the Versatile Express etc code as possible?

The only place this should be needed is arch/arm/plat-versatile/platsmp.c

>  arch/arm/mach-actions/platsmp.c   |  6 +++---
>  arch/arm/mach-exynos/platsmp.c    | 12 ++++++------
>  arch/arm/mach-hisi/platmcpm.c     | 22 +++++++++++-----------
>  arch/arm/mach-omap2/omap-smp.c    | 10 +++++-----
>  arch/arm/mach-prima2/platsmp.c    | 10 +++++-----
>  arch/arm/mach-qcom/platsmp.c      | 10 +++++-----
>  arch/arm/mach-spear/platsmp.c     | 10 +++++-----
>  arch/arm/mach-sti/platsmp.c       | 10 +++++-----
>  arch/arm/mach-sunxi/mc_smp.c      | 20 ++++++++++----------
>  arch/arm/plat-versatile/platsmp.c | 10 +++++-----
>  10 files changed, 60 insertions(+), 60 deletions(-)
> 
> diff --git a/arch/arm/mach-actions/platsmp.c b/arch/arm/mach-actions/platsmp.c
> index 3efaa10efc439..770079245d273 100644
> --- a/arch/arm/mach-actions/platsmp.c
> +++ b/arch/arm/mach-actions/platsmp.c
> @@ -39,7 +39,7 @@ static void __iomem *sps_base_addr;
>  static void __iomem *timer_base_addr;
>  static int ncores;
>  
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  
>  void owl_secondary_startup(void);
>  
> @@ -93,7 +93,7 @@ static int s500_smp_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  
>  	udelay(10);
>  
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  
>  	smp_send_reschedule(cpu);
>  
> @@ -106,7 +106,7 @@ static int s500_smp_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	writel(0, timer_base_addr + OWL_CPU1_ADDR + (cpu - 1) * 4);
>  	writel(0, timer_base_addr + OWL_CPU1_FLAG + (cpu - 1) * 4);
>  
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  
>  	return 0;
>  }
> diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c
> index 6a1e682371b32..17dca0ff336e0 100644
> --- a/arch/arm/mach-exynos/platsmp.c
> +++ b/arch/arm/mach-exynos/platsmp.c
> @@ -239,7 +239,7 @@ static void write_pen_release(int val)
>  	sync_cache_w(&pen_release);
>  }
>  
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  
>  static void exynos_secondary_init(unsigned int cpu)
>  {
> @@ -252,8 +252,8 @@ static void exynos_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>  
>  int exynos_set_boot_addr(u32 core_id, unsigned long boot_addr)
> @@ -317,7 +317,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * Set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  
>  	/*
>  	 * The secondary processor is waiting to be released from
> @@ -344,7 +344,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  
>  		if (timeout == 0) {
>  			printk(KERN_ERR "cpu1 power enable failed");
> -			spin_unlock(&boot_lock);
> +			raw_spin_unlock(&boot_lock);
>  			return -ETIMEDOUT;
>  		}
>  	}
> @@ -390,7 +390,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * calibrations, then wait for it to finish
>  	 */
>  fail:
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  
>  	return pen_release != -1 ? ret : 0;
>  }
> diff --git a/arch/arm/mach-hisi/platmcpm.c b/arch/arm/mach-hisi/platmcpm.c
> index f66815c3dd07e..00524abd963f7 100644
> --- a/arch/arm/mach-hisi/platmcpm.c
> +++ b/arch/arm/mach-hisi/platmcpm.c
> @@ -61,7 +61,7 @@
>  
>  static void __iomem *sysctrl, *fabric;
>  static int hip04_cpu_table[HIP04_MAX_CLUSTERS][HIP04_MAX_CPUS_PER_CLUSTER];
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  static u32 fabric_phys_addr;
>  /*
>   * [0]: bootwrapper physical address
> @@ -113,7 +113,7 @@ static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle)
>  	if (cluster >= HIP04_MAX_CLUSTERS || cpu >= HIP04_MAX_CPUS_PER_CLUSTER)
>  		return -EINVAL;
>  
> -	spin_lock_irq(&boot_lock);
> +	raw_spin_lock_irq(&boot_lock);
>  
>  	if (hip04_cpu_table[cluster][cpu])
>  		goto out;
> @@ -147,7 +147,7 @@ static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle)
>  
>  out:
>  	hip04_cpu_table[cluster][cpu]++;
> -	spin_unlock_irq(&boot_lock);
> +	raw_spin_unlock_irq(&boot_lock);
>  
>  	return 0;
>  }
> @@ -162,11 +162,11 @@ static void hip04_cpu_die(unsigned int l_cpu)
>  	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
>  	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
>  
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  	hip04_cpu_table[cluster][cpu]--;
>  	if (hip04_cpu_table[cluster][cpu] == 1) {
>  		/* A power_up request went ahead of us. */
> -		spin_unlock(&boot_lock);
> +		raw_spin_unlock(&boot_lock);
>  		return;
>  	} else if (hip04_cpu_table[cluster][cpu] > 1) {
>  		pr_err("Cluster %d CPU%d boots multiple times\n", cluster, cpu);
> @@ -174,7 +174,7 @@ static void hip04_cpu_die(unsigned int l_cpu)
>  	}
>  
>  	last_man = hip04_cluster_is_down(cluster);
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  	if (last_man) {
>  		/* Since it's Cortex A15, disable L2 prefetching. */
>  		asm volatile(
> @@ -203,7 +203,7 @@ static int hip04_cpu_kill(unsigned int l_cpu)
>  	       cpu >= HIP04_MAX_CPUS_PER_CLUSTER);
>  
>  	count = TIMEOUT_MSEC / POLL_MSEC;
> -	spin_lock_irq(&boot_lock);
> +	raw_spin_lock_irq(&boot_lock);
>  	for (tries = 0; tries < count; tries++) {
>  		if (hip04_cpu_table[cluster][cpu])
>  			goto err;
> @@ -211,10 +211,10 @@ static int hip04_cpu_kill(unsigned int l_cpu)
>  		data = readl_relaxed(sysctrl + SC_CPU_RESET_STATUS(cluster));
>  		if (data & CORE_WFI_STATUS(cpu))
>  			break;
> -		spin_unlock_irq(&boot_lock);
> +		raw_spin_unlock_irq(&boot_lock);
>  		/* Wait for clean L2 when the whole cluster is down. */
>  		msleep(POLL_MSEC);
> -		spin_lock_irq(&boot_lock);
> +		raw_spin_lock_irq(&boot_lock);
>  	}
>  	if (tries >= count)
>  		goto err;
> @@ -231,10 +231,10 @@ static int hip04_cpu_kill(unsigned int l_cpu)
>  		goto err;
>  	if (hip04_cluster_is_down(cluster))
>  		hip04_set_snoop_filter(cluster, 0);
> -	spin_unlock_irq(&boot_lock);
> +	raw_spin_unlock_irq(&boot_lock);
>  	return 1;
>  err:
> -	spin_unlock_irq(&boot_lock);
> +	raw_spin_unlock_irq(&boot_lock);
>  	return 0;
>  }
>  #endif
> diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c
> index 1c73694c871ad..ac4d2f030b874 100644
> --- a/arch/arm/mach-omap2/omap-smp.c
> +++ b/arch/arm/mach-omap2/omap-smp.c
> @@ -69,7 +69,7 @@ static const struct omap_smp_config omap5_cfg __initconst = {
>  	.startup_addr = omap5_secondary_startup,
>  };
>  
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  
>  void __iomem *omap4_get_scu_base(void)
>  {
> @@ -177,8 +177,8 @@ static void omap4_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>  
>  static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
> @@ -191,7 +191,7 @@ static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * Set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  
>  	/*
>  	 * Update the AuxCoreBoot0 with boot state for secondary core.
> @@ -270,7 +270,7 @@ static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * Now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  
>  	return 0;
>  }
> diff --git a/arch/arm/mach-prima2/platsmp.c b/arch/arm/mach-prima2/platsmp.c
> index 75ef5d4be554c..c17c86e5d860a 100644
> --- a/arch/arm/mach-prima2/platsmp.c
> +++ b/arch/arm/mach-prima2/platsmp.c
> @@ -22,7 +22,7 @@
>  
>  static void __iomem *clk_base;
>  
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  
>  static void sirfsoc_secondary_init(unsigned int cpu)
>  {
> @@ -36,8 +36,8 @@ static void sirfsoc_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>  
>  static const struct of_device_id clk_ids[]  = {
> @@ -75,7 +75,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	/* make sure write buffer is drained */
>  	mb();
>  
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  
>  	/*
>  	 * The secondary processor is waiting to be released from
> @@ -107,7 +107,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  
>  	return pen_release != -1 ? -ENOSYS : 0;
>  }
> diff --git a/arch/arm/mach-qcom/platsmp.c b/arch/arm/mach-qcom/platsmp.c
> index 5494c9e0c909b..e8ce157d3548a 100644
> --- a/arch/arm/mach-qcom/platsmp.c
> +++ b/arch/arm/mach-qcom/platsmp.c
> @@ -46,7 +46,7 @@
>  
>  extern void secondary_startup_arm(void);
>  
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  
>  #ifdef CONFIG_HOTPLUG_CPU
>  static void qcom_cpu_die(unsigned int cpu)
> @@ -60,8 +60,8 @@ static void qcom_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>  
>  static int scss_release_secondary(unsigned int cpu)
> @@ -284,7 +284,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int))
>  	 * set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  
>  	/*
>  	 * Send the secondary CPU a soft interrupt, thereby causing
> @@ -297,7 +297,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int))
>  	 * now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  
>  	return ret;
>  }
> diff --git a/arch/arm/mach-spear/platsmp.c b/arch/arm/mach-spear/platsmp.c
> index 39038a03836ac..6da5c93872bf8 100644
> --- a/arch/arm/mach-spear/platsmp.c
> +++ b/arch/arm/mach-spear/platsmp.c
> @@ -32,7 +32,7 @@ static void write_pen_release(int val)
>  	sync_cache_w(&pen_release);
>  }
>  
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  
>  static void __iomem *scu_base = IOMEM(VA_SCU_BASE);
>  
> @@ -47,8 +47,8 @@ static void spear13xx_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>  
>  static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
> @@ -59,7 +59,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  
>  	/*
>  	 * The secondary processor is waiting to be released from
> @@ -84,7 +84,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  
>  	return pen_release != -1 ? -ENOSYS : 0;
>  }
> diff --git a/arch/arm/mach-sti/platsmp.c b/arch/arm/mach-sti/platsmp.c
> index 231f19e174365..a3419b7003e6f 100644
> --- a/arch/arm/mach-sti/platsmp.c
> +++ b/arch/arm/mach-sti/platsmp.c
> @@ -35,7 +35,7 @@ static void write_pen_release(int val)
>  	sync_cache_w(&pen_release);
>  }
>  
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  
>  static void sti_secondary_init(unsigned int cpu)
>  {
> @@ -48,8 +48,8 @@ static void sti_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>  
>  static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
> @@ -60,7 +60,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  
>  	/*
>  	 * The secondary processor is waiting to be released from
> @@ -91,7 +91,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  
>  	return pen_release != -1 ? -ENOSYS : 0;
>  }
> diff --git a/arch/arm/mach-sunxi/mc_smp.c b/arch/arm/mach-sunxi/mc_smp.c
> index b4037b603897d..8a6c872f52b59 100644
> --- a/arch/arm/mach-sunxi/mc_smp.c
> +++ b/arch/arm/mach-sunxi/mc_smp.c
> @@ -367,7 +367,7 @@ static void sunxi_cluster_cache_disable_without_axi(void)
>  static int sunxi_mc_smp_cpu_table[SUNXI_NR_CLUSTERS][SUNXI_CPUS_PER_CLUSTER];
>  int sunxi_mc_smp_first_comer;
>  
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  
>  static bool sunxi_mc_smp_cluster_is_down(unsigned int cluster)
>  {
> @@ -399,7 +399,7 @@ static int sunxi_mc_smp_boot_secondary(unsigned int l_cpu, struct task_struct *i
>  	if (cluster >= SUNXI_NR_CLUSTERS || cpu >= SUNXI_CPUS_PER_CLUSTER)
>  		return -EINVAL;
>  
> -	spin_lock_irq(&boot_lock);
> +	raw_spin_lock_irq(&boot_lock);
>  
>  	if (sunxi_mc_smp_cpu_table[cluster][cpu])
>  		goto out;
> @@ -417,7 +417,7 @@ static int sunxi_mc_smp_boot_secondary(unsigned int l_cpu, struct task_struct *i
>  
>  out:
>  	sunxi_mc_smp_cpu_table[cluster][cpu]++;
> -	spin_unlock_irq(&boot_lock);
> +	raw_spin_unlock_irq(&boot_lock);
>  
>  	return 0;
>  }
> @@ -448,13 +448,13 @@ static void sunxi_mc_smp_cpu_die(unsigned int l_cpu)
>  	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
>  	pr_debug("%s: cluster %u cpu %u\n", __func__, cluster, cpu);
>  
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  	sunxi_mc_smp_cpu_table[cluster][cpu]--;
>  	if (sunxi_mc_smp_cpu_table[cluster][cpu] == 1) {
>  		/* A power_up request went ahead of us. */
>  		pr_debug("%s: aborting due to a power up request\n",
>  			 __func__);
> -		spin_unlock(&boot_lock);
> +		raw_spin_unlock(&boot_lock);
>  		return;
>  	} else if (sunxi_mc_smp_cpu_table[cluster][cpu] > 1) {
>  		pr_err("Cluster %d CPU%d boots multiple times\n",
> @@ -463,7 +463,7 @@ static void sunxi_mc_smp_cpu_die(unsigned int l_cpu)
>  	}
>  
>  	last_man = sunxi_mc_smp_cluster_is_down(cluster);
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  
>  	gic_cpu_if_down(0);
>  	if (last_man)
> @@ -542,11 +542,11 @@ static int sunxi_mc_smp_cpu_kill(unsigned int l_cpu)
>  
>  	/* wait for CPU core to die and enter WFI */
>  	count = TIMEOUT_USEC / POLL_USEC;
> -	spin_lock_irq(&boot_lock);
> +	raw_spin_lock_irq(&boot_lock);
>  	for (tries = 0; tries < count; tries++) {
> -		spin_unlock_irq(&boot_lock);
> +		raw_spin_unlock_irq(&boot_lock);
>  		usleep_range(POLL_USEC / 2, POLL_USEC);
> -		spin_lock_irq(&boot_lock);
> +		raw_spin_lock_irq(&boot_lock);
>  
>  		/*
>  		 * If the user turns off a bunch of cores at the same
> @@ -593,7 +593,7 @@ static int sunxi_mc_smp_cpu_kill(unsigned int l_cpu)
>  	sunxi_cluster_powerdown(cluster);
>  
>  out:
> -	spin_unlock_irq(&boot_lock);
> +	raw_spin_unlock_irq(&boot_lock);
>  	pr_debug("%s: cluster %u cpu %u powerdown: %d\n",
>  		 __func__, cluster, cpu, ret);
>  	return !ret;
> diff --git a/arch/arm/plat-versatile/platsmp.c b/arch/arm/plat-versatile/platsmp.c
> index c2366510187a8..6b60f582b738c 100644
> --- a/arch/arm/plat-versatile/platsmp.c
> +++ b/arch/arm/plat-versatile/platsmp.c
> @@ -32,7 +32,7 @@ static void write_pen_release(int val)
>  	sync_cache_w(&pen_release);
>  }
>  
> -static DEFINE_SPINLOCK(boot_lock);
> +static DEFINE_RAW_SPINLOCK(boot_lock);
>  
>  void versatile_secondary_init(unsigned int cpu)
>  {
> @@ -45,8 +45,8 @@ void versatile_secondary_init(unsigned int cpu)
>  	/*
>  	 * Synchronise with the boot thread.
>  	 */
> -	spin_lock(&boot_lock);
> -	spin_unlock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  }
>  
>  int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
> @@ -57,7 +57,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * Set synchronisation state between this boot processor
>  	 * and the secondary one
>  	 */
> -	spin_lock(&boot_lock);
> +	raw_spin_lock(&boot_lock);
>  
>  	/*
>  	 * This is really belt and braces; we hold unintended secondary
> @@ -87,7 +87,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
>  	 * now the secondary core is starting up let it run its
>  	 * calibrations, then wait for it to finish
>  	 */
> -	spin_unlock(&boot_lock);
> +	raw_spin_unlock(&boot_lock);
>  
>  	return pen_release != -1 ? -ENOSYS : 0;
>  }
> -- 
> 2.20.0.rc2
> 

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-07 13:00   ` Russell King - ARM Linux
@ 2018-12-08 23:15     ` Linus Walleij
  2018-12-09  0:41       ` Russell King - ARM Linux
  0 siblings, 1 reply; 19+ messages in thread
From: Linus Walleij @ 2018-12-08 23:15 UTC (permalink / raw)
  To: Russell King
  Cc: ext Tony Lindgren, viresh kumar, Sebastian Andrzej Siewior,
	David Brown, Xu Wei, Manivannan Sadhasivam, linux-samsung-soc,
	Viresh Kumar, Krzysztof Kozlowski, Maxime Ripard, Chen-Yu Tsai,
	Kukjin Kim, Andy Gross, Arnd Bergmann, linux-arm-msm,
	Thomas Gleixner, Linux-OMAP, Linux ARM, Barry Song, Frank Rowand,
	Patrice CHOTARD, shiraz hashim, Andreas Färber

On Fri, Dec 7, 2018 at 2:00 PM Russell King - ARM Linux
<linux@armlinux.org.uk> wrote:

> I think the bigger question is why are all these platforms using this.
> For the ARM development platforms, it's fair as they have no way to
> power down the CPUs or reset the other CPUs, and the "boot lock" was
> there originally to prevent the delay calibration going awry.  (I've
> been saying this for years but obviously people like to stuff their
> fingers in their ears.)
>
> It annoys me when people cargo-cult copy code and don't bother trying
> to understand it, which is shown clearly in your diffstat.

It's a very good point.

I did away with "my" culprit at one point in
commit c00def71efd9
"ARM: ux500: simplify secondary CPU boot"
so people could look at that for an example to see how pointless
this "pen holding" (what does it even mean? I never figured it
out) and boot_lock is.

I thought it was just a few platforms around the time the first
SMP systems were arriving that were doing this but as you point
out, recent newcomers are doing this, such as sti, actions,
qcom, hisi, sunxi.

I would encourage other platforms that have this "write some
magic stuff in some register to boot cpu n" to look at
arch/arm/mach-ux500/platsmp.c, something like that should be
all you need, no special assembly, no "pen", no "boot lock".

Look up resources etc. in .smp_prepare_cpus(), make sure to
call set_cpu_possible() for each available CPU, then just
kick CPU n in .smp_boot_secondary() by doing your platform
magic. Optionally implement .cpu_die() if you need special code,
or just put in a wfi() for that. That's all.
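For reference, the shape being suggested looks roughly like the sketch
below.  This is a kernel-style sketch, not a drop-in file:
soc_set_boot_addr() and soc_kick_cpu() stand in for whatever register
pokes a given platform needs, and the smp_operations field names should
be double-checked against the kernel tree being targeted.

```c
static void __init soc_smp_prepare_cpus(unsigned int max_cpus)
{
	/* ioremap boot registers etc., then mark the CPUs usable */
	for (unsigned int i = 0; i < max_cpus; i++)
		set_cpu_possible(i, true);
}

static int soc_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
	/* point the core at the common secondary entry and wake it */
	soc_set_boot_addr(cpu, __pa_symbol(secondary_startup));
	soc_kick_cpu(cpu);		/* platform-specific wakeup */
	return 0;
}

#ifdef CONFIG_HOTPLUG_CPU
static void soc_cpu_die(unsigned int cpu)
{
	wfi();				/* park until powered off/reset */
}
#endif

static const struct smp_operations soc_smp_ops __initconst = {
	.smp_prepare_cpus	= soc_smp_prepare_cpus,
	.smp_boot_secondary	= soc_boot_secondary,
#ifdef CONFIG_HOTPLUG_CPU
	.cpu_die		= soc_cpu_die,
#endif
};
```

No assembly trampoline, no pen, no boot_lock; the ops are wired up via
the machine descriptor's .smp field or CPU_METHOD_OF_DECLARE().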

Yours,
Linus Walleij


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-08 23:15     ` Linus Walleij
@ 2018-12-09  0:41       ` Russell King - ARM Linux
  2018-12-10 14:37         ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 19+ messages in thread
From: Russell King - ARM Linux @ 2018-12-09  0:41 UTC (permalink / raw)
  To: Linus Walleij
  Cc: ext Tony Lindgren, viresh kumar, Sebastian Andrzej Siewior,
	David Brown, Xu Wei, Manivannan Sadhasivam, linux-samsung-soc,
	Viresh Kumar, Krzysztof Kozlowski, Maxime Ripard, Chen-Yu Tsai,
	Kukjin Kim, Andy Gross, Arnd Bergmann, linux-arm-msm,
	Thomas Gleixner, Linux-OMAP, Linux ARM, Barry Song, Frank Rowand,
	Patrice CHOTARD, shiraz hashim, Andreas Färber

On Sun, Dec 09, 2018 at 12:15:35AM +0100, Linus Walleij wrote:
> On Fri, Dec 7, 2018 at 2:00 PM Russell King - ARM Linux
> <linux@armlinux.org.uk> wrote:
> 
> > I think the bigger question is why are all these platforms using this.
> > For the ARM development platforms, it's fair as they have no way to
> > power down the CPUs or reset the other CPUs, and the "boot lock" was
> > there originally to prevent the delay calibration going awry.  (I've
> > been saying this for years but obviously people like to stuff their
> > fingers in their ears.)
> >
> > It annoys me when people cargo-cult copy code and don't bother trying
> > to understand it, which is shown clearly in your diffstat.
> 
> It's a very good point.
> 
> I did away with "my" culprit at one point in
> commit c00def71efd9
> "ARM: ux500: simplify secondary CPU boot"
> so people could look at that for an example to see how pointless
> this "pen holding" (what does it even mean, I never figured it
> out?) and boot_lock is.

It's a "holding pen" - in normal parlance, it's a place where livestock
are temporarily confined.

In this case, our livestock are CPUs, and they are temporarily confined
in a tight loop.  Early ARM development boards did not have a way to
wake individual secondary CPUs from the boot loader, so the only way to
boot them as Linux wanted was to direct the boot loader to release all
CPUs into a "holding pen" and then release them from the holding pen
one at a time when Linux wanted a secondary CPU to come online.

The early systems also did not have very good bandwidth between the
CPUs and memory, which meant that the CPU requesting another core to
be plugged in would perturb the secondary core's delay calibration by
a noticeable amount, and trapping the requesting core in a spinlock
would prevent the delay calibration going awry.
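The pen/boot_lock handshake described above can be modelled in user
space.  The toy below is purely illustrative: an atomic int stands in
for the kernel's pen_release and a pthread mutex for boot_lock, and
none of the cache maintenance, wfe/IPI wakeups or timeout handling of
the real code is present.

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int pen_release = -1;	/* which "CPU" may leave the pen */
static atomic_int cpu_came_up;
static pthread_mutex_t boot_lock = PTHREAD_MUTEX_INITIALIZER;

static void *secondary(void *arg)
{
	int cpu = (int)(long)arg;

	/* confined in the pen until the boot CPU releases us */
	while (atomic_load(&pen_release) != cpu)
		;
	atomic_store(&pen_release, -1);	/* ack: write_pen_release(-1) */

	/*
	 * "Synchronise with the boot thread": we cannot get past this
	 * point until the boot CPU has dropped boot_lock.
	 */
	pthread_mutex_lock(&boot_lock);
	pthread_mutex_unlock(&boot_lock);

	atomic_store(&cpu_came_up, 1);
	return NULL;
}

/* Returns 0 once the simulated secondary has made it out of the pen. */
int boot_one_cpu(int cpu)
{
	pthread_t t;

	pthread_mutex_lock(&boot_lock);		/* *_boot_secondary() */
	pthread_create(&t, NULL, secondary, (void *)(long)cpu);
	atomic_store(&pen_release, cpu);	/* release it from the pen */
	while (atomic_load(&pen_release) != -1)	/* wait for the ack */
		;
	pthread_mutex_unlock(&boot_lock);	/* let it finish its init */

	pthread_join(t, NULL);
	return atomic_load(&cpu_came_up) ? 0 : -1;
}
```

Build with -pthread; boot_one_cpu(1) returns 0 once the secondary has
passed the lock/unlock synchronisation point, mirroring the flow of
*_boot_secondary() and *_secondary_init().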

So, really _both_ these things are really really specific to ARM
development platforms, and have nothing to do with modern production
hardware.

No one should _ever_ copy the ARM reference platform SMP hotplug
code.  Needing that code is quite simply a sign that the platform is
not production hardware.

I really don't get how we've ended up with so many copies of this.
Maybe the code isn't being properly reviewed?  Maybe the process for
merging new platforms is way too lenient?  Whatever, the fact that
we're ending up with new copies of the pen release and boot lock
stuff demonstrably illustrates that the review process for new
platforms is very broken.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-09  0:41       ` Russell King - ARM Linux
@ 2018-12-10 14:37         ` Sebastian Andrzej Siewior
  2018-12-11 18:19           ` Linus Walleij
  0 siblings, 1 reply; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-10 14:37 UTC (permalink / raw)
  To: Russell King - ARM Linux, Arnd Bergmann, Thomas Gleixner
  Cc: linux-arm-msm, Barry Song, linux-samsung-soc, ext Tony Lindgren,
	Frank Rowand, Linus Walleij, Patrice CHOTARD,
	Krzysztof Kozlowski, David Brown, Chen-Yu Tsai, Kukjin Kim,
	viresh kumar, Xu Wei, Manivannan Sadhasivam, Andy Gross,
	Maxime Ripard, Linux-OMAP, shiraz hashim, Andreas Färber,
	Linux ARM, Viresh Kumar

On 2018-12-09 00:41:39 [+0000], Russell King - ARM Linux wrote:
> So, really _both_ these things are really really specific to ARM
> development platforms, and have nothing to do with modern production
> hardware.
> 
> No one should _ever_ copy the ARM reference platform SMP hotplug
> code.  Needing that code is quite simply a sign that the platform is
> not production hardware.
> 
> I really don't get how we've ended up with so many copies of this.
> Maybe the code isn't being properly reviewed?  Maybe the process for
> merging new platforms is way too lenient?  Whatever, the fact that
> we're ending up with new copies of the pen release and boot lock
> stuff demonstrably illustrates that the review process for new
> platforms is very broken.

Okay. So the whole patch won't get applied, right? If so, may I please
get the
	arch/arm/plat-versatile/platsmp.c

hunk applied? The remaining platforms would then have to either justify why
they have the same problems as the ARM devel board or rework their
code.

Sebastian


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-10 14:37         ` Sebastian Andrzej Siewior
@ 2018-12-11 18:19           ` Linus Walleij
  2018-12-11 21:23             ` [PATCH 1/3 v2] arm: versatile: Convert " Sebastian Andrzej Siewior
                               ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Linus Walleij @ 2018-12-11 18:19 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: ext Tony Lindgren, viresh kumar, David Brown, Xu Wei,
	Manivannan Sadhasivam, linux-samsung-soc, Viresh Kumar,
	Russell King, Krzysztof Kozlowski, Maxime Ripard, Chen-Yu Tsai,
	Kukjin Kim, Andy Gross, Arnd Bergmann, linux-arm-msm,
	Thomas Gleixner, Linux-OMAP, Linux ARM, Barry Song, Frank Rowand,
	Patrice CHOTARD, shiraz hashim, Andreas Färber

On Mon, Dec 10, 2018 at 3:37 PM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:

> Okay. So the whole patch won't get applied, right? If so, may I please
> get the
>         arch/arm/plat-versatile/platsmp.c
>
> hunk applied?

Can you please send one patch per mach-* plat-*?

I can apply the plat-versatile patch and send upstream.

> The remaining platforms would then have to either justify why
> they have the same problems as the ARM devel board or rework their
> code.

I sent a patch for Actions semiconductor today:
https://marc.info/?l=linux-arm-kernel&m=154451577304234&w=2

I guess I will keep on patching when I have time.
It would be even nicer if platform maintainers saw the
problem and did it themselves, but that doesn't always
work, so I usually poke them with a patch when I want
something to improve.

Yours,
Linus Walleij


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 1/3 v2] arm: versatile: Convert boot_lock to raw
  2018-12-11 18:19           ` Linus Walleij
@ 2018-12-11 21:23             ` Sebastian Andrzej Siewior
  2018-12-11 21:29             ` [PATCH 1/3] arm: Convert arm " Sebastian Andrzej Siewior
  2018-12-12 11:15             ` Russell King - ARM Linux
  2 siblings, 0 replies; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-11 21:23 UTC (permalink / raw)
  To: Linus Walleij
  Cc: ext Tony Lindgren, viresh kumar, David Brown, Xu Wei,
	Manivannan Sadhasivam, linux-samsung-soc, Viresh Kumar,
	Russell King, Krzysztof Kozlowski, Maxime Ripard, Chen-Yu Tsai,
	Kukjin Kim, Andy Gross, Arnd Bergmann, linux-arm-msm,
	Thomas Gleixner, Linux-OMAP, Linux ARM, Barry Song, Frank Rowand,
	Patrice CHOTARD, shiraz hashim, Andreas Färber

The arm boot_lock is used by the secondary processor startup code.  The locking
task is the idle thread, which has idle->sched_class == &idle_sched_class.
idle_sched_class->enqueue_task == NULL, so if the idle task blocks on the
lock, the attempt to wake it when the lock becomes available will fail:

try_to_wake_up()
   ...
      activate_task()
         enqueue_task()
            p->sched_class->enqueue_task(rq, p, flags)

Fix by converting boot_lock to a raw spin lock.

Cc: Russell King <linux@armlinux.org.uk>
Cc: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Frank Rowand <frank.rowand@am.sony.com>
Link: http://lkml.kernel.org/r/4E77B952.3010606@am.sony.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/arm/plat-versatile/platsmp.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm/plat-versatile/platsmp.c b/arch/arm/plat-versatile/platsmp.c
index c2366510187a8..6b60f582b738c 100644
--- a/arch/arm/plat-versatile/platsmp.c
+++ b/arch/arm/plat-versatile/platsmp.c
@@ -32,7 +32,7 @@ static void write_pen_release(int val)
 	sync_cache_w(&pen_release);
 }
 
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
 
 void versatile_secondary_init(unsigned int cpu)
 {
@@ -45,8 +45,8 @@ void versatile_secondary_init(unsigned int cpu)
 	/*
 	 * Synchronise with the boot thread.
 	 */
-	spin_lock(&boot_lock);
-	spin_unlock(&boot_lock);
+	raw_spin_lock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 }
 
 int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
@@ -57,7 +57,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * Set synchronisation state between this boot processor
 	 * and the secondary one
 	 */
-	spin_lock(&boot_lock);
+	raw_spin_lock(&boot_lock);
 
 	/*
 	 * This is really belt and braces; we hold unintended secondary
@@ -87,7 +87,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
 	 * now the secondary core is starting up let it run its
 	 * calibrations, then wait for it to finish
 	 */
-	spin_unlock(&boot_lock);
+	raw_spin_unlock(&boot_lock);
 
 	return pen_release != -1 ? -ENOSYS : 0;
 }
-- 
2.20.0



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-11 18:19           ` Linus Walleij
  2018-12-11 21:23             ` [PATCH 1/3 v2] arm: versatile: Convert " Sebastian Andrzej Siewior
@ 2018-12-11 21:29             ` Sebastian Andrzej Siewior
  2018-12-13 11:48               ` Russell King - ARM Linux
  2018-12-12 11:15             ` Russell King - ARM Linux
  2 siblings, 1 reply; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-11 21:29 UTC (permalink / raw)
  To: Linus Walleij
  Cc: ext Tony Lindgren, viresh kumar, David Brown, Xu Wei,
	Manivannan Sadhasivam, linux-samsung-soc, Viresh Kumar,
	Russell King, Krzysztof Kozlowski, Maxime Ripard, Chen-Yu Tsai,
	Kukjin Kim, Andy Gross, Arnd Bergmann, linux-arm-msm,
	Thomas Gleixner, Linux-OMAP, Linux ARM, Barry Song, Frank Rowand,
	Patrice CHOTARD, shiraz hashim, Andreas Färber

On 2018-12-11 19:19:32 [+0100], Linus Walleij wrote:
> Can you please send one patch per mach-* plat-*?
> 
> I can apply the plat-versatile patch and send upstream.

I just resent the versatile hunk. Should I really split up and resend
the other platforms? Russell was not very amused about it.

> Yours,
> Linus Walleij

Sebastian

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-11 18:19           ` Linus Walleij
  2018-12-11 21:23             ` [PATCH 1/3 v2] arm: versatile: Convert " Sebastian Andrzej Siewior
  2018-12-11 21:29             ` [PATCH 1/3] arm: Convert arm " Sebastian Andrzej Siewior
@ 2018-12-12 11:15             ` Russell King - ARM Linux
  2 siblings, 0 replies; 19+ messages in thread
From: Russell King - ARM Linux @ 2018-12-12 11:15 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Maxime Ripard, viresh kumar, Sebastian Andrzej Siewior,
	David Brown, Krzysztof Kozlowski, Manivannan Sadhasivam,
	linux-samsung-soc, Viresh Kumar, Xu Wei, ext Tony Lindgren,
	Chen-Yu Tsai, Kukjin Kim, Andy Gross, Arnd Bergmann,
	linux-arm-msm, Thomas Gleixner, Linux-OMAP, Linux ARM,
	Barry Song, Frank Rowand, Patrice CHOTARD, shiraz hashim,
	Andreas Färber

On Tue, Dec 11, 2018 at 07:19:32PM +0100, Linus Walleij wrote:
> On Mon, Dec 10, 2018 at 3:37 PM Sebastian Andrzej Siewior
> <bigeasy@linutronix.de> wrote:
> 
> > Okay. So the whole patch won't get applied, right? If so, may I please
> > get the
> >         arch/arm/plat-versatile/platsmp.c
> >
> > hunk applied?
> 
> Can you please send one patch per mach-* plat-*?

I think just patch arch/arm/plat-versatile/platsmp.c and ignore the
rest - as I say, the right solution is to eliminate the lock.

I have a patch to simply remove the lock from omap2+.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-11 21:29             ` [PATCH 1/3] arm: Convert arm " Sebastian Andrzej Siewior
@ 2018-12-13 11:48               ` Russell King - ARM Linux
  2018-12-13 12:42                 ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 19+ messages in thread
From: Russell King - ARM Linux @ 2018-12-13 11:48 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Maxime Ripard, viresh kumar, Linus Walleij, David Brown,
	Krzysztof Kozlowski, Manivannan Sadhasivam, linux-samsung-soc,
	Viresh Kumar, Xu Wei, ext Tony Lindgren, Chen-Yu Tsai,
	Kukjin Kim, Andy Gross, Arnd Bergmann, linux-arm-msm,
	Thomas Gleixner, Linux-OMAP, Linux ARM, Barry Song, Frank Rowand,
	Patrice CHOTARD, shiraz hashim, Andreas Färber

On Tue, Dec 11, 2018 at 10:29:30PM +0100, Sebastian Andrzej Siewior wrote:
> On 2018-12-11 19:19:32 [+0100], Linus Walleij wrote:
> > Can you please send one patch per mach-* plat-*?
> > 
> > I can apply the plat-versatile patch and send upstream.
> 
> I just resent the versatile hunk. Should I really split up and resend
> the other platforms? Russell was not very amused about it.

Just the Versatile bit, and no I don't see any point in fixing the
other SoCs - it's an incentive for them to fix up their code.

However, given that hardly any have responded, pointing out this
problem in email clearly hasn't worked by the resounding silence
from SoC maintainers.

Removing boot_lock from those platforms who just cargo-culted the
boot_lock including comments without the pen_release stuff is easy,
all the lock is doing there is providing a bit of unnecessary
synchronisation between the requesting CPU and the booting CPU.
Only a couple of SoCs fall into this case - OMAP4 and qcom.

The rest aren't something that could just be ripped out without
the involvement of the SoC maintainer to test the changes - so
I'm of the opinion that we should just send SoC maintainers
patches that remove the pen_release and boot_lock crap.  If they
blindly accept the patches, so be it.

I'm preparing a series to attack some of this.
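
For the cargo-cult case described above (boot_lock taken around the wakeup with no pen_release dance at all), the removal really is mechanical. A hedged sketch with an invented platform name, not any of the actual per-SoC patches:

```diff
 int foo_boot_secondary(unsigned int cpu, struct task_struct *idle)
 {
-	spin_lock(&boot_lock);
-
 	/* kick the secondary CPU out of its holding loop */
 	arch_send_wakeup_ipi_mask(cpumask_of(cpu));
-
-	spin_unlock(&boot_lock);
 	return 0;
 }
 ```

All the lock did was serialise the requesting CPU against the booting CPU, which nothing here depends on.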

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/3] arm: Convert arm boot_lock to raw
  2018-12-13 11:48               ` Russell King - ARM Linux
@ 2018-12-13 12:42                 ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 12:42 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Maxime Ripard, viresh kumar, Linus Walleij, David Brown,
	Krzysztof Kozlowski, Manivannan Sadhasivam, linux-samsung-soc,
	Viresh Kumar, Xu Wei, ext Tony Lindgren, Chen-Yu Tsai,
	Kukjin Kim, Andy Gross, Arnd Bergmann, linux-arm-msm,
	Thomas Gleixner, Linux-OMAP, Linux ARM, Barry Song, Frank Rowand,
	Patrice CHOTARD, shiraz hashim, Andreas Färber

On 2018-12-13 11:48:42 [+0000], Russell King - ARM Linux wrote:
> On Tue, Dec 11, 2018 at 10:29:30PM +0100, Sebastian Andrzej Siewior wrote:
> > I just resent the versatile hunk. Should I really split up and resend
> > the other platforms? Russell was not very amused about it.
> 
> Just the Versatile bit, and no I don't see any point in fixing the
> other SoCs - it's an incentive for them to fix up their code.

Okay. I already resent that.

> The rest aren't something that could just be ripped out without
> the involvement of the SoC maintainer to test the changes - so
> I'm of the opinion that we should just send SoC maintainers
> patches that remove the pen_release and boot_lock crap.  If they
> blindly accept the patches, so be it.
> 
> I'm preparing a series to attack some of this.

Thank you.

Sebastian

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/3] arm: kprobe: make patch_lock a raw_spinlock_t
  2018-12-07 10:27 ` [PATCH 2/3] arm: kprobe: make patch_lock a raw_spinlock_t Sebastian Andrzej Siewior
@ 2019-02-13  8:46   ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-02-13  8:46 UTC (permalink / raw)
  To: linux-arm-kernel, Arnd Bergmann; +Cc: Yang Shi, tglx, Russell King

On 2018-12-07 11:27:48 [+0100], To linux-arm-kernel@lists.infradead.org wrote:
> From: Yang Shi <yang.shi@linaro.org>
> 
> When running kprobe on -rt kernel, the below bug is caught:
> 
> |BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:931
> |in_atomic(): 1, irqs_disabled(): 128, pid: 14, name: migration/0
> |Preemption disabled at:[<802f2b98>] cpu_stopper_thread+0xc0/0x140
> |CPU: 0 PID: 14 Comm: migration/0 Tainted: G O 4.8.3-rt2 #1
> |Hardware name: Freescale LS1021A
> |[<8025a43c>] (___might_sleep)
> |[<80b5b324>] (rt_spin_lock)
> |[<80b5c31c>] (__patch_text_real)
> |[<80b5c3ac>] (patch_text_stop_machine)
> |[<802f2920>] (multi_cpu_stop)
> 
> Since patch_text_stop_machine() is called from stop_machine(), which
> disables IRQs, a sleeping lock must not be used in this atomic context,
> so replace patch_lock with a raw lock.
> 
> Signed-off-by: Yang Shi <yang.shi@linaro.org>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
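
The fix is purely a type change at the declaration and call sites; a minimal sketch of the resulting pattern (function name and body are illustrative, not the exact patch):

```c
/* A sleeping lock (which spinlock_t becomes on PREEMPT_RT) cannot be
 * taken here: patch_text_stop_machine() runs via stop_machine() with
 * IRQs disabled. A raw_spinlock_t always busy-waits, so it stays legal
 * in this atomic context. */
static DEFINE_RAW_SPINLOCK(patch_lock);

static void __kprobes patch_one(void *addr, unsigned int insn)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&patch_lock, flags);
	/* ... write the instruction and flush the icache ... */
	raw_spin_unlock_irqrestore(&patch_lock, flags);
}
```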
ping

Sebastian

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 3/3] arm: use a raw_spinlock_t in unwind
  2018-12-07 10:27 ` [PATCH 3/3] arm: use a raw_spinlock_t in unwind Sebastian Andrzej Siewior
@ 2019-02-13  8:46   ` Sebastian Andrzej Siewior
  2019-02-13 15:14     ` Arnd Bergmann
  0 siblings, 1 reply; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-02-13  8:46 UTC (permalink / raw)
  To: linux-arm-kernel, Arnd Bergmann; +Cc: tglx, Russell King

On 2018-12-07 11:27:49 [+0100], To linux-arm-kernel@lists.infradead.org wrote:
> Unwinding is mostly done with IRQs enabled; however, SLUB may call it
> with IRQs disabled while creating a new SLUB cache.
> 
> I had a system freeze while loading a module which called
> kmem_cache_create() on init. That means SLUB's __slab_alloc() disabled
> interrupts and then
> 
> ->new_slab_objects()
>  ->new_slab()
>   ->setup_object()
>    ->setup_object_debug()
>     ->init_tracking()
>      ->set_track()
>       ->save_stack_trace()
>        ->save_stack_trace_tsk()
>         ->walk_stackframe()
>          ->unwind_frame()
>           ->unwind_find_idx()
>            =>spin_lock_irqsave(&unwind_lock);
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

ping

Sebastian

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 3/3] arm: use a raw_spinlock_t in unwind
  2019-02-13  8:46   ` Sebastian Andrzej Siewior
@ 2019-02-13 15:14     ` Arnd Bergmann
  2019-02-13 15:57       ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 19+ messages in thread
From: Arnd Bergmann @ 2019-02-13 15:14 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior; +Cc: Thomas Gleixner, Russell King, Linux ARM

On Wed, Feb 13, 2019 at 9:46 AM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> On 2018-12-07 11:27:49 [+0100], To linux-arm-kernel@lists.infradead.org wrote:
> > Unwinding is mostly done with IRQs enabled; however, SLUB may call it
> > with IRQs disabled while creating a new SLUB cache.
> >
> > I had a system freeze while loading a module which called
> > kmem_cache_create() on init. That means SLUB's __slab_alloc() disabled
> > interrupts and then
> >
> > ->new_slab_objects()
> >  ->new_slab()
> >   ->setup_object()
> >    ->setup_object_debug()
> >     ->init_tracking()
> >      ->set_track()
> >       ->save_stack_trace()
> >        ->save_stack_trace_tsk()
> >         ->walk_stackframe()
> >          ->unwind_frame()
> >           ->unwind_find_idx()
> >            =>spin_lock_irqsave(&unwind_lock);
> >
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>
> ping

Patches 2/3 and 3/3 look good to me. Can you add them to
https://www.arm.linux.org.uk/developer/patches/ ?

Reviewed-by: Arnd Bergmann <arnd@arndb.de>

Are there any other ARM specific RT patches that are not yet picked up?

        Arnd

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 3/3] arm: use a raw_spinlock_t in unwind
  2019-02-13 15:14     ` Arnd Bergmann
@ 2019-02-13 15:57       ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-02-13 15:57 UTC (permalink / raw)
  To: Arnd Bergmann; +Cc: Thomas Gleixner, Russell King, Linux ARM

On 2019-02-13 16:14:18 [+0100], Arnd Bergmann wrote:
> Patches 2/3 and 3/3 look good to me. Can you add them to
> https://www.arm.linux.org.uk/developer/patches/ ?
> 
> Reviewed-by: Arnd Bergmann <arnd@arndb.de>

Will do, thanks.

> Are there any other ARM specific RT patches that are not yet picked up?

No, not at the moment.

>         Arnd

Sebastian

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2019-02-13 15:57 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-07 10:27 [PATCH 0/3 REPOST] arm: convert a few spinlock_t locks to raw_spinlock_t Sebastian Andrzej Siewior
2018-12-07 10:27 ` [PATCH 1/3] arm: Convert arm boot_lock to raw Sebastian Andrzej Siewior
2018-12-07 12:49   ` Linus Walleij
2018-12-07 13:00   ` Russell King - ARM Linux
2018-12-08 23:15     ` Linus Walleij
2018-12-09  0:41       ` Russell King - ARM Linux
2018-12-10 14:37         ` Sebastian Andrzej Siewior
2018-12-11 18:19           ` Linus Walleij
2018-12-11 21:23             ` [PATCH 1/3 v2] arm: versatile: Convert " Sebastian Andrzej Siewior
2018-12-11 21:29             ` [PATCH 1/3] arm: Convert arm " Sebastian Andrzej Siewior
2018-12-13 11:48               ` Russell King - ARM Linux
2018-12-13 12:42                 ` Sebastian Andrzej Siewior
2018-12-12 11:15             ` Russell King - ARM Linux
2018-12-07 10:27 ` [PATCH 2/3] arm: kprobe: make patch_lock a raw_spinlock_t Sebastian Andrzej Siewior
2019-02-13  8:46   ` Sebastian Andrzej Siewior
2018-12-07 10:27 ` [PATCH 3/3] arm: use a raw_spinlock_t in unwind Sebastian Andrzej Siewior
2019-02-13  8:46   ` Sebastian Andrzej Siewior
2019-02-13 15:14     ` Arnd Bergmann
2019-02-13 15:57       ` Sebastian Andrzej Siewior

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).