* [PATCHv2 0/8] arm_pmu: fix lockdep issues with ACPI systems
@ 2018-02-05 16:41 Mark Rutland
  2018-02-05 16:41 ` [PATCHv2 1/8] ARM: ux500: remove PMU IRQ bouncer Mark Rutland
                   ` (7 more replies)
  0 siblings, 8 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-05 16:41 UTC (permalink / raw)
  To: linux-arm-kernel

Currently the arm_pmu ACPI code is somewhat dubious: it attempts to
allocate memory and manipulate IRQs in a hotplug callback, which runs
in atomic context.

These patches (based on the arm64 for-next/core branch [1]) attempt to
address this by moving work out of the hotplug callback, requiring a
reorganisation of the common arm_pmu code.

I've given these a boot-test on a Juno R1 system, both with DT and ACPI.
In either case the PMU works as expected, and lockdep seems happy.

I've pushed the series out to my arm64/acpi-pmu-lockdep branch [2].

Since v1 [3]:
* Kill arm_pmu_platdata (and ux500 IRQ bouncer)
* Make PMU/CPU binding implicit in hotplug notifier
* Explicitly enable/disable SPIs at hotplug
* Add armpmu_alloc_atomic
* Cleanup per Will's comments

Thanks,
Mark.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=for-next/core
[2] https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/acpi-pmu-lockdep
[3] https://lkml.kernel.org/r/20171101141239.45340-1-mark.rutland@arm.com

Mark Rutland (8):
  ARM: ux500: remove PMU IRQ bouncer
  arm_pmu: kill arm_pmu_platdata
  arm_pmu: fold platform helpers into platform code
  arm_pmu: add armpmu_alloc_atomic()
  arm_pmu: acpi: check for mismatched PPIs
  arm_pmu: explicitly enable/disable SPIs at hotplug
  arm_pmu: note IRQs and PMUs per-cpu
  arm_pmu: acpi: request IRQs up-front

 arch/arm/mach-ux500/cpu-db8500.c |  35 ---------
 drivers/perf/arm_pmu.c           | 149 +++++++++++++++++----------------------
 drivers/perf/arm_pmu_acpi.c      |  61 +++++++++++-----
 drivers/perf/arm_pmu_platform.c  |  37 ++++++++--
 include/linux/perf/arm_pmu.h     |  26 ++-----
 5 files changed, 142 insertions(+), 166 deletions(-)

-- 
2.11.0

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCHv2 1/8] ARM: ux500: remove PMU IRQ bouncer
  2018-02-05 16:41 [PATCHv2 0/8] arm_pmu: fix lockdep issues with ACPI systems Mark Rutland
@ 2018-02-05 16:41 ` Mark Rutland
  2018-02-05 19:05   ` Linus Walleij
  2018-02-05 16:41 ` [PATCHv2 2/8] arm_pmu: kill arm_pmu_platdata Mark Rutland
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2018-02-05 16:41 UTC (permalink / raw)
  To: linux-arm-kernel

The ux500 PMU IRQ bouncer is getting in the way of some fundamental
changes to the ARM PMU driver, and it's the only special case that
exists today. Let's remove it.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/mach-ux500/cpu-db8500.c | 35 -----------------------------------
 1 file changed, 35 deletions(-)

diff --git a/arch/arm/mach-ux500/cpu-db8500.c b/arch/arm/mach-ux500/cpu-db8500.c
index 57058ac46f49..7e5d7a083707 100644
--- a/arch/arm/mach-ux500/cpu-db8500.c
+++ b/arch/arm/mach-ux500/cpu-db8500.c
@@ -23,7 +23,6 @@
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_platform.h>
-#include <linux/perf/arm_pmu.h>
 #include <linux/regulator/machine.h>
 
 #include <asm/outercache.h>
@@ -112,37 +111,6 @@ static void ux500_restart(enum reboot_mode mode, const char *cmd)
 	prcmu_system_reset(0);
 }
 
-/*
- * The PMU IRQ lines of two cores are wired together into a single interrupt.
- * Bounce the interrupt to the other core if it's not ours.
- */
-static irqreturn_t db8500_pmu_handler(int irq, void *dev, irq_handler_t handler)
-{
-	irqreturn_t ret = handler(irq, dev);
-	int other = !smp_processor_id();
-
-	if (ret == IRQ_NONE && cpu_online(other))
-		irq_set_affinity(irq, cpumask_of(other));
-
-	/*
-	 * We should be able to get away with the amount of IRQ_NONEs we give,
-	 * while still having the spurious IRQ detection code kick in if the
-	 * interrupt really starts hitting spuriously.
-	 */
-	return ret;
-}
-
-static struct arm_pmu_platdata db8500_pmu_platdata = {
-	.handle_irq		= db8500_pmu_handler,
-	.irq_flags		= IRQF_NOBALANCING | IRQF_NO_THREAD,
-};
-
-static struct of_dev_auxdata u8500_auxdata_lookup[] __initdata = {
-	/* Requires call-back bindings. */
-	OF_DEV_AUXDATA("arm,cortex-a9-pmu", 0, "arm-pmu", &db8500_pmu_platdata),
-	{},
-};
-
 static struct of_dev_auxdata u8540_auxdata_lookup[] __initdata = {
 	OF_DEV_AUXDATA("stericsson,db8500-prcmu", 0x80157000, "db8500-prcmu", NULL),
 	{},
@@ -165,9 +133,6 @@ static void __init u8500_init_machine(void)
 	if (of_machine_is_compatible("st-ericsson,u8540"))
 		of_platform_populate(NULL, u8500_local_bus_nodes,
 				     u8540_auxdata_lookup, NULL);
-	else
-		of_platform_populate(NULL, u8500_local_bus_nodes,
-				     u8500_auxdata_lookup, NULL);
 }
 
 static const char * stericsson_dt_platform_compat[] = {
-- 
2.11.0


* [PATCHv2 2/8] arm_pmu: kill arm_pmu_platdata
  2018-02-05 16:41 [PATCHv2 0/8] arm_pmu: fix lockdep issues with ACPI systems Mark Rutland
  2018-02-05 16:41 ` [PATCHv2 1/8] ARM: ux500: remove PMU IRQ bouncer Mark Rutland
@ 2018-02-05 16:41 ` Mark Rutland
  2018-02-05 16:41 ` [PATCHv2 3/8] arm_pmu: fold platform helpers into platform code Mark Rutland
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-05 16:41 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we have no platforms passing platform data to the arm_pmu code,
we can get rid of the platdata and associated hooks, paving the way for
rework of our IRQ handling.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c       | 27 ++++-----------------------
 include/linux/perf/arm_pmu.h | 17 -----------------
 2 files changed, 4 insertions(+), 40 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 7bc5eee96b31..82b09d1cb42c 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -17,7 +17,6 @@
 #include <linux/export.h>
 #include <linux/kernel.h>
 #include <linux/perf/arm_pmu.h>
-#include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <linux/sched/clock.h>
 #include <linux/spinlock.h>
@@ -320,17 +319,9 @@ validate_group(struct perf_event *event)
 	return 0;
 }
 
-static struct arm_pmu_platdata *armpmu_get_platdata(struct arm_pmu *armpmu)
-{
-	struct platform_device *pdev = armpmu->plat_device;
-
-	return pdev ? dev_get_platdata(&pdev->dev) : NULL;
-}
-
 static irqreturn_t armpmu_dispatch_irq(int irq, void *dev)
 {
 	struct arm_pmu *armpmu;
-	struct arm_pmu_platdata *plat;
 	int ret;
 	u64 start_clock, finish_clock;
 
@@ -342,13 +333,8 @@ static irqreturn_t armpmu_dispatch_irq(int irq, void *dev)
 	 */
 	armpmu = *(void **)dev;
 
-	plat = armpmu_get_platdata(armpmu);
-
 	start_clock = sched_clock();
-	if (plat && plat->handle_irq)
-		ret = plat->handle_irq(irq, armpmu, armpmu->handle_irq);
-	else
-		ret = armpmu->handle_irq(irq, armpmu);
+	ret = armpmu->handle_irq(irq, armpmu);
 	finish_clock = sched_clock();
 
 	perf_sample_event_took(finish_clock - start_clock);
@@ -578,7 +564,6 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 			goto err_out;
 		}
 	} else {
-		struct arm_pmu_platdata *platdata = armpmu_get_platdata(armpmu);
 		unsigned long irq_flags;
 
 		err = irq_force_affinity(irq, cpumask_of(cpu));
@@ -589,13 +574,9 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 			goto err_out;
 		}
 
-		if (platdata && platdata->irq_flags) {
-			irq_flags = platdata->irq_flags;
-		} else {
-			irq_flags = IRQF_PERCPU |
-				    IRQF_NOBALANCING |
-				    IRQF_NO_THREAD;
-		}
+		irq_flags = IRQF_PERCPU |
+			    IRQF_NOBALANCING |
+			    IRQF_NO_THREAD;
 
 		err = request_irq(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index af0f44effd44..712764b35c6a 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -17,23 +17,6 @@
 #include <linux/sysfs.h>
 #include <asm/cputype.h>
 
-/*
- * struct arm_pmu_platdata - ARM PMU platform data
- *
- * @handle_irq: an optional handler which will be called from the
- *	interrupt and passed the address of the low level handler,
- *	and can be used to implement any platform specific handling
- *	before or after calling it.
- *
- * @irq_flags: if non-zero, these flags will be passed to request_irq
- *             when requesting interrupts for this PMU device.
- */
-struct arm_pmu_platdata {
-	irqreturn_t (*handle_irq)(int irq, void *dev,
-				  irq_handler_t pmu_handler);
-	unsigned long irq_flags;
-};
-
 #ifdef CONFIG_ARM_PMU
 
 /*
-- 
2.11.0


* [PATCHv2 3/8] arm_pmu: fold platform helpers into platform code
  2018-02-05 16:41 [PATCHv2 0/8] arm_pmu: fix lockdep issues with ACPI systems Mark Rutland
  2018-02-05 16:41 ` [PATCHv2 1/8] ARM: ux500: remove PMU IRQ bouncer Mark Rutland
  2018-02-05 16:41 ` [PATCHv2 2/8] arm_pmu: kill arm_pmu_platdata Mark Rutland
@ 2018-02-05 16:41 ` Mark Rutland
  2018-02-05 16:41 ` [PATCHv2 4/8] arm_pmu: add armpmu_alloc_atomic() Mark Rutland
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-05 16:41 UTC (permalink / raw)
  To: linux-arm-kernel

The armpmu_{request,free}_irqs() helpers are only used by
arm_pmu_platform.c, so let's fold them in and make them static.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c          | 21 ---------------------
 drivers/perf/arm_pmu_platform.c | 21 +++++++++++++++++++++
 include/linux/perf/arm_pmu.h    |  2 --
 3 files changed, 21 insertions(+), 23 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 82b09d1cb42c..373dfd7d8a1d 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -534,14 +534,6 @@ void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
 	free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu));
 }
 
-void armpmu_free_irqs(struct arm_pmu *armpmu)
-{
-	int cpu;
-
-	for_each_cpu(cpu, &armpmu->supported_cpus)
-		armpmu_free_irq(armpmu, cpu);
-}
-
 int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 {
 	int err = 0;
@@ -593,19 +585,6 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 	return err;
 }
 
-int armpmu_request_irqs(struct arm_pmu *armpmu)
-{
-	int cpu, err;
-
-	for_each_cpu(cpu, &armpmu->supported_cpus) {
-		err = armpmu_request_irq(armpmu, cpu);
-		if (err)
-			break;
-	}
-
-	return err;
-}
-
 static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
 {
 	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
index 46501cc79fd7..244558cfdbce 100644
--- a/drivers/perf/arm_pmu_platform.c
+++ b/drivers/perf/arm_pmu_platform.c
@@ -164,6 +164,27 @@ static int pmu_parse_irqs(struct arm_pmu *pmu)
 	return 0;
 }
 
+static int armpmu_request_irqs(struct arm_pmu *armpmu)
+{
+	int cpu, err;
+
+	for_each_cpu(cpu, &armpmu->supported_cpus) {
+		err = armpmu_request_irq(armpmu, cpu);
+		if (err)
+			break;
+	}
+
+	return err;
+}
+
+static void armpmu_free_irqs(struct arm_pmu *armpmu)
+{
+	int cpu;
+
+	for_each_cpu(cpu, &armpmu->supported_cpus)
+		armpmu_free_irq(armpmu, cpu);
+}
+
 int arm_pmu_device_probe(struct platform_device *pdev,
 			 const struct of_device_id *of_table,
 			 const struct pmu_probe_info *probe_table)
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 712764b35c6a..899bc7ef0881 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -159,8 +159,6 @@ static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
 struct arm_pmu *armpmu_alloc(void);
 void armpmu_free(struct arm_pmu *pmu);
 int armpmu_register(struct arm_pmu *pmu);
-int armpmu_request_irqs(struct arm_pmu *armpmu);
-void armpmu_free_irqs(struct arm_pmu *armpmu);
 int armpmu_request_irq(struct arm_pmu *armpmu, int cpu);
 void armpmu_free_irq(struct arm_pmu *armpmu, int cpu);
 
-- 
2.11.0


* [PATCHv2 4/8] arm_pmu: add armpmu_alloc_atomic()
  2018-02-05 16:41 [PATCHv2 0/8] arm_pmu: fix lockdep issues with ACPI systems Mark Rutland
                   ` (2 preceding siblings ...)
  2018-02-05 16:41 ` [PATCHv2 3/8] arm_pmu: fold platform helpers into platform code Mark Rutland
@ 2018-02-05 16:41 ` Mark Rutland
  2018-02-05 16:41 ` [PATCHv2 5/8] arm_pmu: acpi: check for mismatched PPIs Mark Rutland
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-05 16:41 UTC (permalink / raw)
  To: linux-arm-kernel

In ACPI systems, we don't know the makeup of CPUs until we hotplug them
on, and thus have to allocate the PMU data structures at hotplug time.
Consequently, we must use GFP_ATOMIC allocations.

Let's add an armpmu_alloc_atomic() that we can use in this case.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c       | 17 ++++++++++++++---
 drivers/perf/arm_pmu_acpi.c  |  2 +-
 include/linux/perf/arm_pmu.h |  1 +
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 373dfd7d8a1d..4f73c5e8d623 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -760,18 +760,18 @@ static void cpu_pmu_destroy(struct arm_pmu *cpu_pmu)
 					    &cpu_pmu->node);
 }
 
-struct arm_pmu *armpmu_alloc(void)
+static struct arm_pmu *__armpmu_alloc(gfp_t flags)
 {
 	struct arm_pmu *pmu;
 	int cpu;
 
-	pmu = kzalloc(sizeof(*pmu), GFP_KERNEL);
+	pmu = kzalloc(sizeof(*pmu), flags);
 	if (!pmu) {
 		pr_info("failed to allocate PMU device!\n");
 		goto out;
 	}
 
-	pmu->hw_events = alloc_percpu(struct pmu_hw_events);
+	pmu->hw_events = alloc_percpu_gfp(struct pmu_hw_events, flags);
 	if (!pmu->hw_events) {
 		pr_info("failed to allocate per-cpu PMU data.\n");
 		goto out_free_pmu;
@@ -817,6 +817,17 @@ struct arm_pmu *armpmu_alloc(void)
 	return NULL;
 }
 
+struct arm_pmu *armpmu_alloc(void)
+{
+	return __armpmu_alloc(GFP_KERNEL);
+}
+
+struct arm_pmu *armpmu_alloc_atomic(void)
+{
+	return __armpmu_alloc(GFP_ATOMIC);
+}
+
+
 void armpmu_free(struct arm_pmu *pmu)
 {
 	free_percpu(pmu->hw_events);
diff --git a/drivers/perf/arm_pmu_acpi.c b/drivers/perf/arm_pmu_acpi.c
index 705f1a390e31..30c5f2bbce59 100644
--- a/drivers/perf/arm_pmu_acpi.c
+++ b/drivers/perf/arm_pmu_acpi.c
@@ -127,7 +127,7 @@ static struct arm_pmu *arm_pmu_acpi_find_alloc_pmu(void)
 		return pmu;
 	}
 
-	pmu = armpmu_alloc();
+	pmu = armpmu_alloc_atomic();
 	if (!pmu) {
 		pr_warn("Unable to allocate PMU for CPU%d\n",
 			smp_processor_id());
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 899bc7ef0881..1f8bb83ef42f 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -157,6 +157,7 @@ static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; }
 
 /* Internal functions only for core arm_pmu code */
 struct arm_pmu *armpmu_alloc(void);
+struct arm_pmu *armpmu_alloc_atomic(void);
 void armpmu_free(struct arm_pmu *pmu);
 int armpmu_register(struct arm_pmu *pmu);
 int armpmu_request_irq(struct arm_pmu *armpmu, int cpu);
-- 
2.11.0


* [PATCHv2 5/8] arm_pmu: acpi: check for mismatched PPIs
  2018-02-05 16:41 [PATCHv2 0/8] arm_pmu: fix lockdep issues with ACPI systems Mark Rutland
                   ` (3 preceding siblings ...)
  2018-02-05 16:41 ` [PATCHv2 4/8] arm_pmu: add armpmu_alloc_atomic() Mark Rutland
@ 2018-02-05 16:41 ` Mark Rutland
  2018-02-05 16:42 ` [PATCHv2 6/8] arm_pmu: explicitly enable/disable SPIs at hotplug Mark Rutland
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-05 16:41 UTC (permalink / raw)
  To: linux-arm-kernel

The arm_pmu platform code explicitly checks for mismatched PPIs at probe
time, while the ACPI code leaves this to the core code. Future
refactoring will make this difficult for the core code to check, so
let's have the ACPI code check this explicitly.

As before, upon a failure we'll continue on without an interrupt. Ho
hum.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c          | 17 ++++-------------
 drivers/perf/arm_pmu_acpi.c     | 42 +++++++++++++++++++++++++++++++++++++----
 drivers/perf/arm_pmu_platform.c |  7 -------
 3 files changed, 42 insertions(+), 24 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 4f73c5e8d623..ddcabd6a5d52 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -543,19 +543,7 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 	if (!irq)
 		return 0;
 
-	if (irq_is_percpu_devid(irq) && cpumask_empty(&armpmu->active_irqs)) {
-		err = request_percpu_irq(irq, handler, "arm-pmu",
-					 &hw_events->percpu_pmu);
-	} else if (irq_is_percpu_devid(irq)) {
-		int other_cpu = cpumask_first(&armpmu->active_irqs);
-		int other_irq = per_cpu(hw_events->irq, other_cpu);
-
-		if (irq != other_irq) {
-			pr_warn("mismatched PPIs detected.\n");
-			err = -EINVAL;
-			goto err_out;
-		}
-	} else {
+	if (!irq_is_percpu_devid(irq)) {
 		unsigned long irq_flags;
 
 		err = irq_force_affinity(irq, cpumask_of(cpu));
@@ -572,6 +560,9 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 
 		err = request_irq(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
+	} else if (cpumask_empty(&armpmu->active_irqs)) {
+		err = request_percpu_irq(irq, handler, "arm-pmu",
+					 &hw_events->percpu_pmu);
 	}
 
 	if (err)
diff --git a/drivers/perf/arm_pmu_acpi.c b/drivers/perf/arm_pmu_acpi.c
index 30c5f2bbce59..09a1a36cff57 100644
--- a/drivers/perf/arm_pmu_acpi.c
+++ b/drivers/perf/arm_pmu_acpi.c
@@ -11,6 +11,8 @@
 #include <linux/acpi.h>
 #include <linux/cpumask.h>
 #include <linux/init.h>
+#include <linux/irq.h>
+#include <linux/irqdesc.h>
 #include <linux/percpu.h>
 #include <linux/perf/arm_pmu.h>
 
@@ -140,6 +142,35 @@ static struct arm_pmu *arm_pmu_acpi_find_alloc_pmu(void)
 }
 
 /*
+ * Check whether the new IRQ is compatible with those already associated with
+ * the PMU (e.g. we don't have mismatched PPIs).
+ */
+static bool pmu_irq_matches(struct arm_pmu *pmu, int irq)
+{
+	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
+	int cpu;
+
+	if (!irq)
+		return true;
+
+	for_each_cpu(cpu, &pmu->supported_cpus) {
+		int other_irq = per_cpu(hw_events->irq, cpu);
+		if (!other_irq)
+			continue;
+
+		if (irq == other_irq)
+			continue;
+		if (!irq_is_percpu_devid(irq) && !irq_is_percpu_devid(other_irq))
+			continue;
+
+		pr_warn("mismatched PPIs detected\n");
+		return false;
+	}
+
+	return true;
+}
+
+/*
  * This must run before the common arm_pmu hotplug logic, so that we can
  * associate a CPU and its interrupt before the common code tries to manage the
  * affinity and so on.
@@ -164,18 +195,21 @@ static int arm_pmu_acpi_cpu_starting(unsigned int cpu)
 	if (!pmu)
 		return -ENOMEM;
 
-	cpumask_set_cpu(cpu, &pmu->supported_cpus);
-
 	per_cpu(probed_pmus, cpu) = pmu;
 
+	if (pmu_irq_matches(pmu, irq)) {
+		hw_events = pmu->hw_events;
+		per_cpu(hw_events->irq, cpu) = irq;
+	}
+
+	cpumask_set_cpu(cpu, &pmu->supported_cpus);
+
 	/*
 	 * Log and request the IRQ so the core arm_pmu code can manage it.  In
 	 * some situations (e.g. mismatched PPIs), we may fail to request the
 	 * IRQ. However, it may be too late for us to do anything about it.
 	 * The common ARM PMU code will log a warning in this case.
 	 */
-	hw_events = pmu->hw_events;
-	per_cpu(hw_events->irq, cpu) = irq;
 	armpmu_request_irq(pmu, cpu);
 
 	/*
diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
index 244558cfdbce..1dc3c1f574e0 100644
--- a/drivers/perf/arm_pmu_platform.c
+++ b/drivers/perf/arm_pmu_platform.c
@@ -127,13 +127,6 @@ static int pmu_parse_irqs(struct arm_pmu *pmu)
 			pdev->dev.of_node);
 	}
 
-	/*
-	 * Some platforms have all PMU IRQs OR'd into a single IRQ, with a
-	 * special platdata function that attempts to demux them.
-	 */
-	if (dev_get_platdata(&pdev->dev))
-		cpumask_setall(&pmu->supported_cpus);
-
 	for (i = 0; i < num_irqs; i++) {
 		int cpu, irq;
 
-- 
2.11.0


* [PATCHv2 6/8] arm_pmu: explicitly enable/disable SPIs at hotplug
  2018-02-05 16:41 [PATCHv2 0/8] arm_pmu: fix lockdep issues with ACPI systems Mark Rutland
                   ` (4 preceding siblings ...)
  2018-02-05 16:41 ` [PATCHv2 5/8] arm_pmu: acpi: check for mismatched PPIs Mark Rutland
@ 2018-02-05 16:42 ` Mark Rutland
  2018-02-26 15:16     ` Geert Uytterhoeven
  2018-02-05 16:42 ` [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu Mark Rutland
  2018-02-05 16:42 ` [PATCHv2 8/8] arm_pmu: acpi: request IRQs up-front Mark Rutland
  7 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2018-02-05 16:42 UTC (permalink / raw)
  To: linux-arm-kernel

To support ACPI systems, we need to request IRQs before CPUs are
hotplugged, and thus we need to request IRQs before we know their
associated PMU.

This is problematic if a PMU IRQ is pending out of reset, as it may be
taken before we know the PMU, and thus the IRQ handler won't be able to
handle it, leaving it screaming.

To avoid such problems, let's request all IRQs in a disabled state, and
explicitly enable/disable them at hotplug time, when we're sure the PMU
has been probed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index ddcabd6a5d52..72118e6f9122 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -558,6 +558,7 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 			    IRQF_NOBALANCING |
 			    IRQF_NO_THREAD;
 
+		irq_set_status_flags(irq, IRQ_NOAUTOEN);
 		err = request_irq(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
 	} else if (cpumask_empty(&armpmu->active_irqs)) {
@@ -600,10 +601,10 @@ static int arm_perf_starting_cpu(unsigned int cpu, struct hlist_node *node)
 
 	irq = armpmu_get_cpu_irq(pmu, cpu);
 	if (irq) {
-		if (irq_is_percpu_devid(irq)) {
+		if (irq_is_percpu_devid(irq))
 			enable_percpu_irq(irq, IRQ_TYPE_NONE);
-			return 0;
-		}
+		else
+			enable_irq(irq);
 	}
 
 	return 0;
@@ -618,8 +619,12 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
 		return 0;
 
 	irq = armpmu_get_cpu_irq(pmu, cpu);
-	if (irq && irq_is_percpu_devid(irq))
-		disable_percpu_irq(irq);
+	if (irq) {
+		if (irq_is_percpu_devid(irq))
+			disable_percpu_irq(irq);
+		else
+			disable_irq(irq);
+	}
 
 	return 0;
 }
-- 
2.11.0


* [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu
  2018-02-05 16:41 [PATCHv2 0/8] arm_pmu: fix lockdep issues with ACPI systems Mark Rutland
                   ` (5 preceding siblings ...)
  2018-02-05 16:42 ` [PATCHv2 6/8] arm_pmu: explicitly enable/disable SPIs at hotplug Mark Rutland
@ 2018-02-05 16:42 ` Mark Rutland
  2018-02-05 17:07   ` Robin Murphy
  2018-02-14 13:11   ` Will Deacon
  2018-02-05 16:42 ` [PATCHv2 8/8] arm_pmu: acpi: request IRQs up-front Mark Rutland
  7 siblings, 2 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-05 16:42 UTC (permalink / raw)
  To: linux-arm-kernel

To support ACPI systems, we need to request IRQs before we know the
associated PMU, and thus we need a percpu variable from which the IRQ
handler can find the PMU.

As we're going to request IRQs without the PMU, we can't rely on the
arm_pmu::active_irqs mask, and similarly need to track requested IRQs
with a percpu variable.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c       | 76 +++++++++++++++++++++++++++++++-------------
 include/linux/perf/arm_pmu.h |  1 -
 2 files changed, 54 insertions(+), 23 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 72118e6f9122..023a8ebdace6 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -25,6 +25,9 @@
 
 #include <asm/irq_regs.h>
 
+static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
+static DEFINE_PER_CPU(int, cpu_irq);
+
 static int
 armpmu_map_cache_event(const unsigned (*cache_map)
 				      [PERF_COUNT_HW_CACHE_MAX]
@@ -325,13 +328,9 @@ static irqreturn_t armpmu_dispatch_irq(int irq, void *dev)
 	int ret;
 	u64 start_clock, finish_clock;
 
-	/*
-	 * we request the IRQ with a (possibly percpu) struct arm_pmu**, but
-	 * the handlers expect a struct arm_pmu*. The percpu_irq framework will
-	 * do any necessary shifting, we just need to perform the first
-	 * dereference.
-	 */
-	armpmu = *(void **)dev;
+	armpmu = this_cpu_read(cpu_armpmu);
+	if (WARN_ON_ONCE(!armpmu))
+		return IRQ_NONE;
 
 	start_clock = sched_clock();
 	ret = armpmu->handle_irq(irq, armpmu);
@@ -517,29 +516,47 @@ int perf_num_counters(void)
 }
 EXPORT_SYMBOL_GPL(perf_num_counters);
 
-void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
+int armpmu_count_irq_users(const int irq)
 {
-	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
-	int irq = per_cpu(hw_events->irq, cpu);
+	int cpu, count = 0;
+
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(cpu_irq, cpu) == irq)
+			count++;
+	}
 
-	if (!cpumask_test_and_clear_cpu(cpu, &armpmu->active_irqs))
+	return count;
+}
+
+void armpmu_free_cpu_irq(int irq, int cpu)
+{
+	if (per_cpu(cpu_irq, cpu) == 0)
+		return;
+	if (WARN_ON(irq != per_cpu(cpu_irq, cpu)))
 		return;
 
 	if (irq_is_percpu_devid(irq)) {
-		free_percpu_irq(irq, &hw_events->percpu_pmu);
-		cpumask_clear(&armpmu->active_irqs);
-		return;
+		if (armpmu_count_irq_users(irq) == 1)
+			free_percpu_irq(irq, &cpu_armpmu);
+	} else {
+		free_irq(irq, NULL);
 	}
 
-	free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu));
+	per_cpu(cpu_irq, cpu) = 0;
 }
 
-int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
+void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
 {
-	int err = 0;
 	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
-	const irq_handler_t handler = armpmu_dispatch_irq;
 	int irq = per_cpu(hw_events->irq, cpu);
+
+	armpmu_free_cpu_irq(irq, cpu);
+}
+
+int armpmu_request_cpu_irq(int irq, int cpu)
+{
+	int err = 0;
+	const irq_handler_t handler = armpmu_dispatch_irq;
 	if (!irq)
 		return 0;
 
@@ -560,16 +577,16 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 
 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
 		err = request_irq(irq, handler, irq_flags, "arm-pmu",
-				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
-	} else if (cpumask_empty(&armpmu->active_irqs)) {
+				  NULL);
+	} else if (armpmu_count_irq_users(irq) == 0) {
 		err = request_percpu_irq(irq, handler, "arm-pmu",
-					 &hw_events->percpu_pmu);
+					 cpu_armpmu);
 	}
 
 	if (err)
 		goto err_out;
 
-	cpumask_set_cpu(cpu, &armpmu->active_irqs);
+	per_cpu(cpu_irq, cpu) = irq;
 	return 0;
 
 err_out:
@@ -577,6 +594,17 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
 	return err;
 }
 
+int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
+{
+	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
+	int irq = per_cpu(hw_events->irq, cpu);
+	if (!irq)
+		return 0;
+
+
+	return armpmu_request_cpu_irq(irq, cpu);
+}
+
 static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
 {
 	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
@@ -599,6 +627,8 @@ static int arm_perf_starting_cpu(unsigned int cpu, struct hlist_node *node)
 	if (pmu->reset)
 		pmu->reset(pmu);
 
+	per_cpu(cpu_armpmu, cpu) = pmu;
+
 	irq = armpmu_get_cpu_irq(pmu, cpu);
 	if (irq) {
 		if (irq_is_percpu_devid(irq))
@@ -626,6 +656,8 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
 			disable_irq(irq);
 	}
 
+	per_cpu(cpu_armpmu, cpu) = NULL;
+
 	return 0;
 }
 
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 1f8bb83ef42f..feec9e7e85db 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -75,7 +75,6 @@ enum armpmu_attr_groups {
 
 struct arm_pmu {
 	struct pmu	pmu;
-	cpumask_t	active_irqs;
 	cpumask_t	supported_cpus;
 	char		*name;
 	irqreturn_t	(*handle_irq)(int irq_num, void *dev);
-- 
2.11.0


* [PATCHv2 8/8] arm_pmu: acpi: request IRQs up-front
  2018-02-05 16:41 [PATCHv2 0/8] arm_pmu: fix lockdep issues with ACPI systems Mark Rutland
                   ` (6 preceding siblings ...)
  2018-02-05 16:42 ` [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu Mark Rutland
@ 2018-02-05 16:42 ` Mark Rutland
  7 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-05 16:42 UTC (permalink / raw)
  To: linux-arm-kernel

We can't request IRQs in atomic context, so for ACPI systems we'll have
to request them up-front, and later associate them with CPUs.

This patch reorganises the arm_pmu code to do so. As we no longer have
the arm_pmu structure at probe time, a number of prototypes need to be
adjusted, requiring changes to the common arm_pmu code and arm_pmu
platform code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/perf/arm_pmu.c          | 26 +++-----------------------
 drivers/perf/arm_pmu_acpi.c     | 19 ++++++-------------
 drivers/perf/arm_pmu_platform.c | 15 ++++++++++++---
 include/linux/perf/arm_pmu.h    |  5 +++--
 4 files changed, 24 insertions(+), 41 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 023a8ebdace6..1bff95ff3566 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -528,7 +528,7 @@ int armpmu_count_irq_users(const int irq)
 	return count;
 }
 
-void armpmu_free_cpu_irq(int irq, int cpu)
+void armpmu_free_irq(int irq, int cpu)
 {
 	if (per_cpu(cpu_irq, cpu) == 0)
 		return;
@@ -545,15 +545,7 @@ void armpmu_free_cpu_irq(int irq, int cpu)
 	per_cpu(cpu_irq, cpu) = 0;
 }
 
-void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
-{
-	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
-	int irq = per_cpu(hw_events->irq, cpu);
-
-	armpmu_free_cpu_irq(irq, cpu);
-}
-
-int armpmu_request_cpu_irq(int irq, int cpu)
+int armpmu_request_irq(int irq, int cpu)
 {
 	int err = 0;
 	const irq_handler_t handler = armpmu_dispatch_irq;
@@ -576,8 +568,7 @@ int armpmu_request_cpu_irq(int irq, int cpu)
 			    IRQF_NO_THREAD;
 
 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
-		err = request_irq(irq, handler, irq_flags, "arm-pmu",
-				  NULL);
+		err = request_irq(irq, handler, irq_flags, "arm-pmu", NULL);
 	} else if (armpmu_count_irq_users(irq) == 0) {
 		err = request_percpu_irq(irq, handler, "arm-pmu",
 					 cpu_armpmu);
@@ -594,17 +585,6 @@ int armpmu_request_cpu_irq(int irq, int cpu)
 	return err;
 }
 
-int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
-{
-	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
-	int irq = per_cpu(hw_events->irq, cpu);
-	if (!irq)
-		return 0;
-
-
-	return armpmu_request_cpu_irq(irq, cpu);
-}
-
 static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
 {
 	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
diff --git a/drivers/perf/arm_pmu_acpi.c b/drivers/perf/arm_pmu_acpi.c
index 09a1a36cff57..0f197516d708 100644
--- a/drivers/perf/arm_pmu_acpi.c
+++ b/drivers/perf/arm_pmu_acpi.c
@@ -89,7 +89,13 @@ static int arm_pmu_acpi_parse_irqs(void)
 			pr_warn("No ACPI PMU IRQ for CPU%d\n", cpu);
 		}
 
+		/*
+		 * Log and request the IRQ so the core arm_pmu code can manage
+		 * it. We'll have to sanity-check IRQs later when we associate
+		 * them with their PMUs.
+		 */
 		per_cpu(pmu_irqs, cpu) = irq;
+		armpmu_request_irq(irq, cpu);
 	}
 
 	return 0;
@@ -205,14 +211,6 @@ static int arm_pmu_acpi_cpu_starting(unsigned int cpu)
 	cpumask_set_cpu(cpu, &pmu->supported_cpus);
 
 	/*
-	 * Log and request the IRQ so the core arm_pmu code can manage it.  In
-	 * some situations (e.g. mismatched PPIs), we may fail to request the
-	 * IRQ. However, it may be too late for us to do anything about it.
-	 * The common ARM PMU code will log a warning in this case.
-	 */
-	armpmu_request_irq(pmu, cpu);
-
-	/*
 	 * Ideally, we'd probe the PMU here when we find the first matching
 	 * CPU. We can't do that for several reasons; see the comment in
 	 * arm_pmu_acpi_init().
@@ -281,11 +279,6 @@ static int arm_pmu_acpi_init(void)
 	if (acpi_disabled)
 		return 0;
 
-	/*
-	 * We can't request IRQs yet, since we don't know the cookie value
-	 * until we know which CPUs share the same logical PMU. We'll handle
-	 * that in arm_pmu_acpi_cpu_starting().
-	 */
 	ret = arm_pmu_acpi_parse_irqs();
 	if (ret)
 		return ret;
diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
index 1dc3c1f574e0..7729eda5909d 100644
--- a/drivers/perf/arm_pmu_platform.c
+++ b/drivers/perf/arm_pmu_platform.c
@@ -159,10 +159,15 @@ static int pmu_parse_irqs(struct arm_pmu *pmu)
 
 static int armpmu_request_irqs(struct arm_pmu *armpmu)
 {
+	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
 	int cpu, err;
 
 	for_each_cpu(cpu, &armpmu->supported_cpus) {
-		err = armpmu_request_irq(armpmu, cpu);
+		int irq = per_cpu(hw_events->irq, cpu);
+		if (!irq)
+			continue;
+
+		err = armpmu_request_irq(irq, cpu);
 		if (err)
 			break;
 	}
@@ -173,9 +178,13 @@ static int armpmu_request_irqs(struct arm_pmu *armpmu)
 static void armpmu_free_irqs(struct arm_pmu *armpmu)
 {
 	int cpu;
+	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
 
-	for_each_cpu(cpu, &armpmu->supported_cpus)
-		armpmu_free_irq(armpmu, cpu);
+	for_each_cpu(cpu, &armpmu->supported_cpus) {
+		int irq = per_cpu(hw_events->irq, cpu);
+
+		armpmu_free_irq(irq, cpu);
+	}
 }
 
 int arm_pmu_device_probe(struct platform_device *pdev,
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index feec9e7e85db..40036a57d072 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -14,6 +14,7 @@
 
 #include <linux/interrupt.h>
 #include <linux/perf_event.h>
+#include <linux/platform_device.h>
 #include <linux/sysfs.h>
 #include <asm/cputype.h>
 
@@ -159,8 +160,8 @@ struct arm_pmu *armpmu_alloc(void);
 struct arm_pmu *armpmu_alloc_atomic(void);
 void armpmu_free(struct arm_pmu *pmu);
 int armpmu_register(struct arm_pmu *pmu);
-int armpmu_request_irq(struct arm_pmu *armpmu, int cpu);
-void armpmu_free_irq(struct arm_pmu *armpmu, int cpu);
+int armpmu_request_irq(int irq, int cpu);
+void armpmu_free_irq(int irq, int cpu);
 
 #define ARMV8_PMU_PDEV_NAME "armv8-pmu"
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu
  2018-02-05 16:42 ` [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu Mark Rutland
@ 2018-02-05 17:07   ` Robin Murphy
  2018-02-05 17:13     ` Mark Rutland
  2018-02-14 13:11   ` Will Deacon
  1 sibling, 1 reply; 26+ messages in thread
From: Robin Murphy @ 2018-02-05 17:07 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Mark,

Just a couple of nits spotted in passing...

On 05/02/18 16:42, Mark Rutland wrote:
> To support ACPI systems, we need to request IRQs before we know the
> associated PMU, and thus we need some percpu variable that the IRQ
> handler can find the PMU from.
> 
> As we're going to request IRQs without the PMU, we can't rely on the
> arm_pmu::active_irqs mask, and similarly need to track requested IRQs
> with a percpu variable.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>   drivers/perf/arm_pmu.c       | 76 +++++++++++++++++++++++++++++++-------------
>   include/linux/perf/arm_pmu.h |  1 -
>   2 files changed, 54 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> index 72118e6f9122..023a8ebdace6 100644
> --- a/drivers/perf/arm_pmu.c
> +++ b/drivers/perf/arm_pmu.c
> @@ -25,6 +25,9 @@
>   
>   #include <asm/irq_regs.h>
>   
> +static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
> +static DEFINE_PER_CPU(int, cpu_irq);
> +
>   static int
>   armpmu_map_cache_event(const unsigned (*cache_map)
>   				      [PERF_COUNT_HW_CACHE_MAX]
> @@ -325,13 +328,9 @@ static irqreturn_t armpmu_dispatch_irq(int irq, void *dev)
>   	int ret;
>   	u64 start_clock, finish_clock;
>   
> -	/*
> -	 * we request the IRQ with a (possibly percpu) struct arm_pmu**, but
> -	 * the handlers expect a struct arm_pmu*. The percpu_irq framework will
> -	 * do any necessary shifting, we just need to perform the first
> -	 * dereference.
> -	 */
> -	armpmu = *(void **)dev;
> +	armpmu = this_cpu_read(cpu_armpmu);
> +	if (WARN_ON_ONCE(!armpmu))
> +		return IRQ_NONE;
>   
>   	start_clock = sched_clock();
>   	ret = armpmu->handle_irq(irq, armpmu);
> @@ -517,29 +516,47 @@ int perf_num_counters(void)
>   }
>   EXPORT_SYMBOL_GPL(perf_num_counters);
>   
> -void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
> +int armpmu_count_irq_users(const int irq)
>   {
> -	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
> -	int irq = per_cpu(hw_events->irq, cpu);
> +	int cpu, count = 0;
> +
> +	for_each_possible_cpu(cpu) {
> +		if (per_cpu(cpu_irq, cpu) == irq)
> +			count++;
> +	}
>   
> -	if (!cpumask_test_and_clear_cpu(cpu, &armpmu->active_irqs))
> +	return count;
> +}
> +
> +void armpmu_free_cpu_irq(int irq, int cpu)
> +{
> +	if (per_cpu(cpu_irq, cpu) == 0)
> +		return;
> +	if (WARN_ON(irq != per_cpu(cpu_irq, cpu)))
>   		return;
>   
>   	if (irq_is_percpu_devid(irq)) {
> -		free_percpu_irq(irq, &hw_events->percpu_pmu);
> -		cpumask_clear(&armpmu->active_irqs);
> -		return;
> +		if (armpmu_count_irq_users(irq) == 1)
> +			free_percpu_irq(irq, &cpu_armpmu);
> +	} else {
> +		free_irq(irq, NULL);
>   	}

Following the same pattern as the request would make this marginally 
simpler (and nicely symmetric), i.e.

	if (!irq_is_percpu_devid(irq))
		free_irq(irq, NULL);
	else if (armpmu_count_irq_users(irq) == 1)
		free_percpu_irq(irq, &cpu_armpmu);

> -	free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu));
> +	per_cpu(cpu_irq, cpu) = 0;
>   }
>   
> -int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
> +void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
>   {
> -	int err = 0;
>   	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
> -	const irq_handler_t handler = armpmu_dispatch_irq;
>   	int irq = per_cpu(hw_events->irq, cpu);
> +
> +	armpmu_free_cpu_irq(irq, cpu);
> +}
> +
> +int armpmu_request_cpu_irq(int irq, int cpu)
> +{
> +	int err = 0;
> +	const irq_handler_t handler = armpmu_dispatch_irq;
>   	if (!irq)
>   		return 0;
>   
> @@ -560,16 +577,16 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
>   
>   		irq_set_status_flags(irq, IRQ_NOAUTOEN);
>   		err = request_irq(irq, handler, irq_flags, "arm-pmu",
> -				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
> -	} else if (cpumask_empty(&armpmu->active_irqs)) {
> +				  NULL);
> +	} else if (armpmu_count_irq_users(irq) == 0) {
>   		err = request_percpu_irq(irq, handler, "arm-pmu",
> -					 &hw_events->percpu_pmu);
> +					 cpu_armpmu);
>   	}
>   
>   	if (err)
>   		goto err_out;
>   
> -	cpumask_set_cpu(cpu, &armpmu->active_irqs);
> +	per_cpu(cpu_irq, cpu) = irq;
>   	return 0;
>   
>   err_out:
> @@ -577,6 +594,17 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
>   	return err;
>   }
>   
> +int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
> +{
> +	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
> +	int irq = per_cpu(hw_events->irq, cpu);
> +	if (!irq)
> +		return 0;
> +
> +

Double blank line here.

Robin.

> +	return armpmu_request_cpu_irq(irq, cpu);
> +}
> +
>   static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu)
>   {
>   	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
> @@ -599,6 +627,8 @@ static int arm_perf_starting_cpu(unsigned int cpu, struct hlist_node *node)
>   	if (pmu->reset)
>   		pmu->reset(pmu);
>   
> +	per_cpu(cpu_armpmu, cpu) = pmu;
> +
>   	irq = armpmu_get_cpu_irq(pmu, cpu);
>   	if (irq) {
>   		if (irq_is_percpu_devid(irq))
> @@ -626,6 +656,8 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
>   			disable_irq(irq);
>   	}
>   
> +	per_cpu(cpu_armpmu, cpu) = NULL;
> +
>   	return 0;
>   }
>   
> diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
> index 1f8bb83ef42f..feec9e7e85db 100644
> --- a/include/linux/perf/arm_pmu.h
> +++ b/include/linux/perf/arm_pmu.h
> @@ -75,7 +75,6 @@ enum armpmu_attr_groups {
>   
>   struct arm_pmu {
>   	struct pmu	pmu;
> -	cpumask_t	active_irqs;
>   	cpumask_t	supported_cpus;
>   	char		*name;
>   	irqreturn_t	(*handle_irq)(int irq_num, void *dev);
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu
  2018-02-05 17:07   ` Robin Murphy
@ 2018-02-05 17:13     ` Mark Rutland
  0 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-05 17:13 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Feb 05, 2018 at 05:07:23PM +0000, Robin Murphy wrote:
> Hi Mark,
> 
> Just a couple of nits spotted in passing...
> 
> On 05/02/18 16:42, Mark Rutland wrote:

> > +void armpmu_free_cpu_irq(int irq, int cpu)
> > +{
> > +	if (per_cpu(cpu_irq, cpu) == 0)
> > +		return;
> > +	if (WARN_ON(irq != per_cpu(cpu_irq, cpu)))
> >   		return;
> >   	if (irq_is_percpu_devid(irq)) {
> > -		free_percpu_irq(irq, &hw_events->percpu_pmu);
> > -		cpumask_clear(&armpmu->active_irqs);
> > -		return;
> > +		if (armpmu_count_irq_users(irq) == 1)
> > +			free_percpu_irq(irq, &cpu_armpmu);
> > +	} else {
> > +		free_irq(irq, NULL);
> >   	}
> 
> Following the same pattern as the request would make this marginally simpler
> (and nicely symmetric), i.e.
> 
> 	if (!irq_is_percpu_devid(irq))
> 		free_irq(irq, NULL);
> 	else if (armpmu_count_irq_users(irq) == 1)
> 		free_percpu_irq(irq, &cpu_armpmu);

Sure; done.

> > +int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
> > +{
> > +	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
> > +	int irq = per_cpu(hw_events->irq, cpu);
> > +	if (!irq)
> > +		return 0;
> > +
> > +
> 
> Double blank line here.

... and gone.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCHv2 1/8] ARM: ux500: remove PMU IRQ bouncer
  2018-02-05 16:41 ` [PATCHv2 1/8] ARM: ux500: remove PMU IRQ bouncer Mark Rutland
@ 2018-02-05 19:05   ` Linus Walleij
  2018-02-06 11:26     ` Mark Rutland
  0 siblings, 1 reply; 26+ messages in thread
From: Linus Walleij @ 2018-02-05 19:05 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Feb 5, 2018 at 5:41 PM, Mark Rutland <mark.rutland@arm.com> wrote:

> The ux500 PMU IRQ bouncer is getting in the way of some fundamental
> changes to the ARM PMU driver, and it's the only special case that
> exists today. Let's remove it.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Linus Walleij <linus.walleij@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>

It's OK; this silicon design was especially strange, it seems.
We are done with the days of performance optimization for
the platform anyway.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCHv2 1/8] ARM: ux500: remove PMU IRQ bouncer
  2018-02-05 19:05   ` Linus Walleij
@ 2018-02-06 11:26     ` Mark Rutland
  0 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-06 11:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Feb 05, 2018 at 08:05:53PM +0100, Linus Walleij wrote:
> On Mon, Feb 5, 2018 at 5:41 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> 
> > The ux500 PMU IRQ bouncer is getting in the way of some fundamental
> > changes to the ARM PMU driver, and it's the only special case that
> > exists today. Let's remove it.
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Linus Walleij <linus.walleij@linaro.org>
> > Cc: Will Deacon <will.deacon@arm.com>
> 
> It's OK; this silicon design was especially strange, it seems.
> We are done with the days of performance optimization for
> the platform anyway.
> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Cheers!

Mark.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu
  2018-02-05 16:42 ` [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu Mark Rutland
  2018-02-05 17:07   ` Robin Murphy
@ 2018-02-14 13:11   ` Will Deacon
  2018-02-14 13:24     ` Mark Rutland
  1 sibling, 1 reply; 26+ messages in thread
From: Will Deacon @ 2018-02-14 13:11 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Feb 05, 2018 at 04:42:01PM +0000, Mark Rutland wrote:
> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> index 72118e6f9122..023a8ebdace6 100644
> --- a/drivers/perf/arm_pmu.c
> +++ b/drivers/perf/arm_pmu.c
> @@ -25,6 +25,9 @@
>  
>  #include <asm/irq_regs.h>
>  
> +static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
> +static DEFINE_PER_CPU(int, cpu_irq);
> +
>  static int
>  armpmu_map_cache_event(const unsigned (*cache_map)
>  				      [PERF_COUNT_HW_CACHE_MAX]
> @@ -325,13 +328,9 @@ static irqreturn_t armpmu_dispatch_irq(int irq, void *dev)
>  	int ret;
>  	u64 start_clock, finish_clock;
>  
> -	/*
> -	 * we request the IRQ with a (possibly percpu) struct arm_pmu**, but
> -	 * the handlers expect a struct arm_pmu*. The percpu_irq framework will
> -	 * do any necessary shifting, we just need to perform the first
> -	 * dereference.
> -	 */
> -	armpmu = *(void **)dev;
> +	armpmu = this_cpu_read(cpu_armpmu);
> +	if (WARN_ON_ONCE(!armpmu))
> +		return IRQ_NONE;
>  
>  	start_clock = sched_clock();
>  	ret = armpmu->handle_irq(irq, armpmu);
> @@ -517,29 +516,47 @@ int perf_num_counters(void)
>  }
>  EXPORT_SYMBOL_GPL(perf_num_counters);
>  
> -void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
> +int armpmu_count_irq_users(const int irq)

This can be static.

>  {
> -	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
> -	int irq = per_cpu(hw_events->irq, cpu);
> +	int cpu, count = 0;
> +
> +	for_each_possible_cpu(cpu) {
> +		if (per_cpu(cpu_irq, cpu) == irq)
> +			count++;
> +	}
>  
> -	if (!cpumask_test_and_clear_cpu(cpu, &armpmu->active_irqs))
> +	return count;
> +}
> +
> +void armpmu_free_cpu_irq(int irq, int cpu)
> +{
> +	if (per_cpu(cpu_irq, cpu) == 0)
> +		return;
> +	if (WARN_ON(irq != per_cpu(cpu_irq, cpu)))
>  		return;
>  
>  	if (irq_is_percpu_devid(irq)) {
> -		free_percpu_irq(irq, &hw_events->percpu_pmu);
> -		cpumask_clear(&armpmu->active_irqs);
> -		return;
> +		if (armpmu_count_irq_users(irq) == 1)
> +			free_percpu_irq(irq, &cpu_armpmu);
> +	} else {
> +		free_irq(irq, NULL);
>  	}
>  
> -	free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu));
> +	per_cpu(cpu_irq, cpu) = 0;
>  }
>  
> -int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
> +void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
>  {
> -	int err = 0;
>  	struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;
> -	const irq_handler_t handler = armpmu_dispatch_irq;
>  	int irq = per_cpu(hw_events->irq, cpu);
> +
> +	armpmu_free_cpu_irq(irq, cpu);
> +}
> +
> +int armpmu_request_cpu_irq(int irq, int cpu)
> +{
> +	int err = 0;
> +	const irq_handler_t handler = armpmu_dispatch_irq;
>  	if (!irq)
>  		return 0;
>  
> @@ -560,16 +577,16 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
>  
>  		irq_set_status_flags(irq, IRQ_NOAUTOEN);
>  		err = request_irq(irq, handler, irq_flags, "arm-pmu",
> -				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
> -	} else if (cpumask_empty(&armpmu->active_irqs)) {
> +				  NULL);
> +	} else if (armpmu_count_irq_users(irq) == 0) {
>  		err = request_percpu_irq(irq, handler, "arm-pmu",
> -					 &hw_events->percpu_pmu);
> +					 cpu_armpmu);

This should be &cpu_armpmu.

Would it be possible to pass &cpu_armpmu as the devid even in the normal
request_irq case and have the dispatcher just pass it through to the
underlying handler, rather than access cpu_armpmu directly?

Will

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu
  2018-02-14 13:11   ` Will Deacon
@ 2018-02-14 13:24     ` Mark Rutland
  2018-02-14 13:26       ` Will Deacon
  0 siblings, 1 reply; 26+ messages in thread
From: Mark Rutland @ 2018-02-14 13:24 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Feb 14, 2018 at 01:11:41PM +0000, Will Deacon wrote:
> On Mon, Feb 05, 2018 at 04:42:01PM +0000, Mark Rutland wrote:
> > diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> > index 72118e6f9122..023a8ebdace6 100644
> > --- a/drivers/perf/arm_pmu.c
> > +++ b/drivers/perf/arm_pmu.c
> > @@ -25,6 +25,9 @@
> >  
> >  #include <asm/irq_regs.h>
> >  
> > +static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
> > +static DEFINE_PER_CPU(int, cpu_irq);
> > +
> >  static int
> >  armpmu_map_cache_event(const unsigned (*cache_map)
> >  				      [PERF_COUNT_HW_CACHE_MAX]
> > @@ -325,13 +328,9 @@ static irqreturn_t armpmu_dispatch_irq(int irq, void *dev)
> >  	int ret;
> >  	u64 start_clock, finish_clock;
> >  
> > -	/*
> > -	 * we request the IRQ with a (possibly percpu) struct arm_pmu**, but
> > -	 * the handlers expect a struct arm_pmu*. The percpu_irq framework will
> > -	 * do any necessary shifting, we just need to perform the first
> > -	 * dereference.
> > -	 */
> > -	armpmu = *(void **)dev;
> > +	armpmu = this_cpu_read(cpu_armpmu);
> > +	if (WARN_ON_ONCE(!armpmu))
> > +		return IRQ_NONE;
> >  
> >  	start_clock = sched_clock();
> >  	ret = armpmu->handle_irq(irq, armpmu);
> > @@ -517,29 +516,47 @@ int perf_num_counters(void)
> >  }
> >  EXPORT_SYMBOL_GPL(perf_num_counters);
> >  
> > -void armpmu_free_irq(struct arm_pmu *armpmu, int cpu)
> > +int armpmu_count_irq_users(const int irq)
> 
> This can be static.

sure.

> > @@ -560,16 +577,16 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
> >  
> >  		irq_set_status_flags(irq, IRQ_NOAUTOEN);
> >  		err = request_irq(irq, handler, irq_flags, "arm-pmu",
> > -				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
> > -	} else if (cpumask_empty(&armpmu->active_irqs)) {
> > +				  NULL);
> > +	} else if (armpmu_count_irq_users(irq) == 0) {
> >  		err = request_percpu_irq(irq, handler, "arm-pmu",
> > -					 &hw_events->percpu_pmu);
> > +					 cpu_armpmu);
> 
> This should be &cpu_armpmu.

I'm not sure that's the case, given the way we statically define
cpu_armpmu, but I'll try to figure that out.

> Would it be possible to pass &cpu_armpmu as the devid even in the normal
> request_irq case and have the dispatcher just pass it through to the
> underlying handler, rather than access cpu_armpmu directly?

Do you mean so that the dispatcher takes a struct arm_pmu ** in all
cases?

Otherwise, we don't have the struct pmu * early enough in the ACPI case.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu
  2018-02-14 13:24     ` Mark Rutland
@ 2018-02-14 13:26       ` Will Deacon
  2018-02-14 13:45         ` Mark Rutland
  2018-02-14 18:22         ` Mark Rutland
  0 siblings, 2 replies; 26+ messages in thread
From: Will Deacon @ 2018-02-14 13:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Feb 14, 2018 at 01:24:17PM +0000, Mark Rutland wrote:
> On Wed, Feb 14, 2018 at 01:11:41PM +0000, Will Deacon wrote:
> > On Mon, Feb 05, 2018 at 04:42:01PM +0000, Mark Rutland wrote:
> > > @@ -560,16 +577,16 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
> > >  
> > >  		irq_set_status_flags(irq, IRQ_NOAUTOEN);
> > >  		err = request_irq(irq, handler, irq_flags, "arm-pmu",
> > > -				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
> > > -	} else if (cpumask_empty(&armpmu->active_irqs)) {
> > > +				  NULL);
> > > +	} else if (armpmu_count_irq_users(irq) == 0) {
> > >  		err = request_percpu_irq(irq, handler, "arm-pmu",
> > > -					 &hw_events->percpu_pmu);
> > > +					 cpu_armpmu);
> > 
> > This should be &cpu_armpmu.
> 
> I'm not sure that's the case, given the way we statically define
> cpu_armpmu, but I'll try to figure that out.

Well that's what you pass to free_percpu_irq :p

> > Would it be possible to pass &cpu_armpmu as the devid even in the normal
> > request_irq case and have the dispatcher just pass it through to the
> > underlying handler, rather than access cpu_armpmu directly?
> 
> Do you mean so that the dispatcher takes a struct arm_pmu ** in all
> cases?

Yes.

Will

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu
  2018-02-14 13:26       ` Will Deacon
@ 2018-02-14 13:45         ` Mark Rutland
  2018-02-14 18:22         ` Mark Rutland
  1 sibling, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-14 13:45 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Feb 14, 2018 at 01:26:41PM +0000, Will Deacon wrote:
> On Wed, Feb 14, 2018 at 01:24:17PM +0000, Mark Rutland wrote:
> > On Wed, Feb 14, 2018 at 01:11:41PM +0000, Will Deacon wrote:
> > > On Mon, Feb 05, 2018 at 04:42:01PM +0000, Mark Rutland wrote:
> > > > @@ -560,16 +577,16 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
> > > >  
> > > >  		irq_set_status_flags(irq, IRQ_NOAUTOEN);
> > > >  		err = request_irq(irq, handler, irq_flags, "arm-pmu",
> > > > -				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
> > > > -	} else if (cpumask_empty(&armpmu->active_irqs)) {
> > > > +				  NULL);
> > > > +	} else if (armpmu_count_irq_users(irq) == 0) {
> > > >  		err = request_percpu_irq(irq, handler, "arm-pmu",
> > > > -					 &hw_events->percpu_pmu);
> > > > +					 cpu_armpmu);
> > > 
> > > This should be &cpu_armpmu.
> > 
> > I'm not sure that's the case, given the way we statically define
> > cpu_armpmu, but I'll try to figure that out.
> 
> Well that's what you pass to free_percpu_irq :p

Ugh. One of the two is clearly wrong...

> > > Would it be possible to pass &cpu_armpmu as the devid even in the normal
> > > request_irq case and have the dispatcher just pass it through to the
> > > underlying handler, rather than access cpu_armpmu directly?
> > 
> > Do you mean so that the dispatcher takes a struct arm_pmu ** in all
> > cases?
> 
> Yes.

Ok. I'll try to piece that together (which should allow us to revert
some changes to the dispatcher).

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu
  2018-02-14 13:26       ` Will Deacon
  2018-02-14 13:45         ` Mark Rutland
@ 2018-02-14 18:22         ` Mark Rutland
  1 sibling, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-14 18:22 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Feb 14, 2018 at 01:26:41PM +0000, Will Deacon wrote:
> On Wed, Feb 14, 2018 at 01:24:17PM +0000, Mark Rutland wrote:
> > On Wed, Feb 14, 2018 at 01:11:41PM +0000, Will Deacon wrote:
> > > On Mon, Feb 05, 2018 at 04:42:01PM +0000, Mark Rutland wrote:
> > > > @@ -560,16 +577,16 @@ int armpmu_request_irq(struct arm_pmu *armpmu, int cpu)
> > > >  
> > > >  		irq_set_status_flags(irq, IRQ_NOAUTOEN);
> > > >  		err = request_irq(irq, handler, irq_flags, "arm-pmu",
> > > > -				  per_cpu_ptr(&hw_events->percpu_pmu, cpu));
> > > > -	} else if (cpumask_empty(&armpmu->active_irqs)) {
> > > > +				  NULL);
> > > > +	} else if (armpmu_count_irq_users(irq) == 0) {
> > > >  		err = request_percpu_irq(irq, handler, "arm-pmu",
> > > > -					 &hw_events->percpu_pmu);
> > > > +					 cpu_armpmu);
> > > 
> > > This should be &cpu_armpmu.
> > 
> > I'm not sure that's the case, given the way we statically define
> > cpu_armpmu, but I'll try to figure that out.
> 
> Well that's what you pass to free_percpu_irq :p

Ugh; the mismatch is clearly a bug...

> > > Would it be possible to pass &cpu_armpmu as the devid even in the normal
> > > request_irq case and have the dispatcher just pass it through to the
> > > underlying handler, rather than access cpu_armpmu directly?
> > 
> > Do you mean so that the dispatcher takes a struct arm_pmu ** in all
> > cases?
> 
> Yes.

Done, and pushed out to my arm64/acpi-pmu-lockdep branch.

Let me know if you want me to repost the series.

Mark.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 6/8] arm_pmu: explicitly enable/disable SPIs at hotplug
  2018-02-05 16:42 ` [PATCHv2 6/8] arm_pmu: explicitly enable/disable SPIs at hotplug Mark Rutland
@ 2018-02-26 15:16     ` Geert Uytterhoeven
  0 siblings, 0 replies; 26+ messages in thread
From: Geert Uytterhoeven @ 2018-02-26 15:16 UTC (permalink / raw)
  To: Mark Rutland; +Cc: Linux ARM, Linus Walleij, Will Deacon, Linux-Renesas

Hi Mark,

On Mon, Feb 5, 2018 at 5:42 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> To support ACPI systems, we need to request IRQs before CPUs are
> hotplugged, and thus we need to request IRQs before we know their
> associated PMU.
>
> This is problematic if a PMU IRQ is pending out of reset, as it may be
> taken before we know the PMU, and thus the IRQ handler won't be able to
> handle it, leaving it screaming.
>
> To avoid such problems, lets request all IRQs in a disabled state, and
> explicitly enable/disable them at hotplug time, when we're sure the PMU
> has been probed.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>

This is now commit 6de3f79112cc26bf in v4.16-rc3, and causes a BUG during
CPU offlining (e.g. during system suspend, or during boot with
CONFIG_ARM_PSCI_CHECKER=y).

With CONFIG_ARM_PSCI_CHECKER=y:

psci_checker: PSCI checker started using 6 CPUs
psci_checker: Starting hotplug tests
psci_checker: Trying to turn off and on again all CPUs
BUG: sleeping function called from invalid context at kernel/irq/manage.c:112
in_atomic(): 1, irqs_disabled(): 128, pid: 15, name: migration/1
no locks held by migration/1/15.
irq event stamp: 192
hardirqs last  enabled at (191): [<00000000803c2507>]
_raw_spin_unlock_irq+0x2c/0x4c
hardirqs last disabled at (192): [<000000007f57ad28>] multi_cpu_stop+0x9c/0x140
softirqs last  enabled at (0): [<0000000004ee1b58>]
copy_process.isra.77.part.78+0x43c/0x1504
softirqs last disabled at (0): [<          (null)>]           (null)
CPU: 1 PID: 15 Comm: migration/1 Not tainted 4.16.0-rc3-salvator-x #1651
Hardware name: Renesas Salvator-X board based on r8a7796 (DT)
Call trace:
 dump_backtrace+0x0/0x140
 show_stack+0x14/0x1c
 dump_stack+0xb4/0xf0
 ___might_sleep+0x1fc/0x218
 __might_sleep+0x70/0x80
 synchronize_irq+0x40/0xa8
 disable_irq+0x20/0x2c
 arm_perf_teardown_cpu+0x80/0xac
 cpuhp_invoke_callback+0x5a0/0xd18
 take_cpu_down+0x84/0xc4
 multi_cpu_stop+0xb0/0x140
 cpu_stopper_thread+0xbc/0x128
 smpboot_thread_fn+0x218/0x234
 kthread+0x11c/0x124
 ret_from_fork+0x10/0x18
CPU1: shutdown
psci: CPU1 killed.

Reverting that commit on v4.16-rc3 fixes the issue.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 26+ messages in thread


* Re: [PATCHv2 6/8] arm_pmu: explicitly enable/disable SPIs at hotplug
  2018-02-26 15:16     ` Geert Uytterhoeven
@ 2018-02-26 15:22       ` Will Deacon
  -1 siblings, 0 replies; 26+ messages in thread
From: Will Deacon @ 2018-02-26 15:22 UTC (permalink / raw)
  To: Geert Uytterhoeven; +Cc: Mark Rutland, Linux ARM, Linus Walleij, Linux-Renesas

On Mon, Feb 26, 2018 at 04:16:19PM +0100, Geert Uytterhoeven wrote:
> On Mon, Feb 5, 2018 at 5:42 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > To support ACPI systems, we need to request IRQs before CPUs are
> > hotplugged, and thus we need to request IRQs before we know their
> > associated PMU.
> >
> > This is problematic if a PMU IRQ is pending out of reset, as it may be
> > taken before we know the PMU, and thus the IRQ handler won't be able to
> > handle it, leaving it screaming.
> >
> > To avoid such problems, lets request all IRQs in a disabled state, and
> > explicitly enable/disable them at hotplug time, when we're sure the PMU
> > has been probed.
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> 
> This is now commit 6de3f79112cc26bf in v4.16-rc3, and causes a BUG during
> CPU offlining (e.g. during system suspend, or during boot with
> CONFIG_ARM_PSCI_CHECKER=y).
> 
> With CONFIG_ARM_PSCI_CHECKER=y:
> 
> psci_checker: PSCI checker started using 6 CPUs
> psci_checker: Starting hotplug tests
> psci_checker: Trying to turn off and on again all CPUs
> BUG: sleeping function called from invalid context at kernel/irq/manage.c:112
> in_atomic(): 1, irqs_disabled(): 128, pid: 15, name: migration/1
> no locks held by migration/1/15.
> irq event stamp: 192
> hardirqs last  enabled at (191): [<00000000803c2507>]
> _raw_spin_unlock_irq+0x2c/0x4c
> hardirqs last disabled at (192): [<000000007f57ad28>] multi_cpu_stop+0x9c/0x140
> softirqs last  enabled at (0): [<0000000004ee1b58>]
> copy_process.isra.77.part.78+0x43c/0x1504
> softirqs last disabled at (0): [<          (null)>]           (null)
> CPU: 1 PID: 15 Comm: migration/1 Not tainted 4.16.0-rc3-salvator-x #1651
> Hardware name: Renesas Salvator-X board based on r8a7796 (DT)
> Call trace:
>  dump_backtrace+0x0/0x140
>  show_stack+0x14/0x1c
>  dump_stack+0xb4/0xf0
>  ___might_sleep+0x1fc/0x218
>  __might_sleep+0x70/0x80
>  synchronize_irq+0x40/0xa8
>  disable_irq+0x20/0x2c

Given that these things are CPU-affine, I reckon this should be
disable_irq_nosync. Mark?

Will

--->8

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 0c2ed11c0603..f63db346c219 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -638,7 +638,7 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
 		if (irq_is_percpu_devid(irq))
 			disable_percpu_irq(irq);
 		else
-			disable_irq(irq);
+			disable_irq_nosync(irq);
 	}
 
 	per_cpu(cpu_armpmu, cpu) = NULL;

^ permalink raw reply related	[flat|nested] 26+ messages in thread
* Re: [PATCHv2 6/8] arm_pmu: explicitly enable/disable SPIs at hotplug
  2018-02-26 15:22       ` Will Deacon
@ 2018-02-26 15:56         ` Geert Uytterhoeven
  -1 siblings, 0 replies; 26+ messages in thread
From: Geert Uytterhoeven @ 2018-02-26 15:56 UTC (permalink / raw)
  To: Will Deacon; +Cc: Mark Rutland, Linux ARM, Linus Walleij, Linux-Renesas

Hi Will,

On Mon, Feb 26, 2018 at 4:22 PM, Will Deacon <will.deacon@arm.com> wrote:
> On Mon, Feb 26, 2018 at 04:16:19PM +0100, Geert Uytterhoeven wrote:
>> On Mon, Feb 5, 2018 at 5:42 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>> > To support ACPI systems, we need to request IRQs before CPUs are
>> > hotplugged, and thus we need to request IRQs before we know their
>> > associated PMU.
>> >
>> > This is problematic if a PMU IRQ is pending out of reset, as it may be
>> > taken before we know the PMU, and thus the IRQ handler won't be able to
>> > handle it, leaving it screaming.
>> >
>> > To avoid such problems, let's request all IRQs in a disabled state, and
>> > explicitly enable/disable them at hotplug time, when we're sure the PMU
>> > has been probed.
>> >
>> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>>
>> This is now commit 6de3f79112cc26bf in v4.16-rc3, and causes a BUG during
>> CPU offlining (e.g. during system suspend, or during boot with
>> CONFIG_ARM_PSCI_CHECKER=y).
>>
>> With CONFIG_ARM_PSCI_CHECKER=y:
>>
>> psci_checker: PSCI checker started using 6 CPUs
>> psci_checker: Starting hotplug tests
>> psci_checker: Trying to turn off and on again all CPUs
>> BUG: sleeping function called from invalid context at kernel/irq/manage.c:112
>> in_atomic(): 1, irqs_disabled(): 128, pid: 15, name: migration/1
>> no locks held by migration/1/15.
>> irq event stamp: 192
>> hardirqs last  enabled at (191): [<00000000803c2507>]
>> _raw_spin_unlock_irq+0x2c/0x4c
>> hardirqs last disabled at (192): [<000000007f57ad28>] multi_cpu_stop+0x9c/0x140
>> softirqs last  enabled at (0): [<0000000004ee1b58>]
>> copy_process.isra.77.part.78+0x43c/0x1504
>> softirqs last disabled at (0): [<          (null)>]           (null)
>> CPU: 1 PID: 15 Comm: migration/1 Not tainted 4.16.0-rc3-salvator-x #1651
>> Hardware name: Renesas Salvator-X board based on r8a7796 (DT)
>> Call trace:
>>  dump_backtrace+0x0/0x140
>>  show_stack+0x14/0x1c
>>  dump_stack+0xb4/0xf0
>>  ___might_sleep+0x1fc/0x218
>>  __might_sleep+0x70/0x80
>>  synchronize_irq+0x40/0xa8
>>  disable_irq+0x20/0x2c
>
> Given that these things are CPU-affine, I reckon this should be
> disable_irq_nosync. Mark?
>
> Will
>
> --->8
>
> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> index 0c2ed11c0603..f63db346c219 100644
> --- a/drivers/perf/arm_pmu.c
> +++ b/drivers/perf/arm_pmu.c
> @@ -638,7 +638,7 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
>                 if (irq_is_percpu_devid(irq))
>                         disable_percpu_irq(irq);
>                 else
> -                       disable_irq(irq);
> +                       disable_irq_nosync(irq);
>         }
>
>         per_cpu(cpu_armpmu, cpu) = NULL;

Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>

Thanks!

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCHv2 6/8] arm_pmu: explicitly enable/disable SPIs at hotplug
  2018-02-26 15:22       ` Will Deacon
@ 2018-02-26 16:01         ` Mark Rutland
  -1 siblings, 0 replies; 26+ messages in thread
From: Mark Rutland @ 2018-02-26 16:01 UTC (permalink / raw)
  To: Will Deacon; +Cc: Geert Uytterhoeven, Linux ARM, Linus Walleij, Linux-Renesas

On Mon, Feb 26, 2018 at 03:22:53PM +0000, Will Deacon wrote:
> On Mon, Feb 26, 2018 at 04:16:19PM +0100, Geert Uytterhoeven wrote:
> > On Mon, Feb 5, 2018 at 5:42 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > > To support ACPI systems, we need to request IRQs before CPUs are
> > > hotplugged, and thus we need to request IRQs before we know their
> > > associated PMU.
> > >
> > > This is problematic if a PMU IRQ is pending out of reset, as it may be
> > > taken before we know the PMU, and thus the IRQ handler won't be able to
> > > handle it, leaving it screaming.
> > >
> > > To avoid such problems, let's request all IRQs in a disabled state, and
> > > explicitly enable/disable them at hotplug time, when we're sure the PMU
> > > has been probed.
> > >
> > > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > 
> > This is now commit 6de3f79112cc26bf in v4.16-rc3, and causes a BUG during
> > CPU offlining (e.g. during system suspend, or during boot with
> > CONFIG_ARM_PSCI_CHECKER=y).
> > 
> > With CONFIG_ARM_PSCI_CHECKER=y:
> > 
> > psci_checker: PSCI checker started using 6 CPUs
> > psci_checker: Starting hotplug tests
> > psci_checker: Trying to turn off and on again all CPUs
> > BUG: sleeping function called from invalid context at kernel/irq/manage.c:112
> > in_atomic(): 1, irqs_disabled(): 128, pid: 15, name: migration/1
> > no locks held by migration/1/15.
> > irq event stamp: 192
> > hardirqs last  enabled at (191): [<00000000803c2507>]
> > _raw_spin_unlock_irq+0x2c/0x4c
> > hardirqs last disabled at (192): [<000000007f57ad28>] multi_cpu_stop+0x9c/0x140
> > softirqs last  enabled at (0): [<0000000004ee1b58>]
> > copy_process.isra.77.part.78+0x43c/0x1504
> > softirqs last disabled at (0): [<          (null)>]           (null)
> > CPU: 1 PID: 15 Comm: migration/1 Not tainted 4.16.0-rc3-salvator-x #1651
> > Hardware name: Renesas Salvator-X board based on r8a7796 (DT)
> > Call trace:
> >  dump_backtrace+0x0/0x140
> >  show_stack+0x14/0x1c
> >  dump_stack+0xb4/0xf0
> >  ___might_sleep+0x1fc/0x218
> >  __might_sleep+0x70/0x80
> >  synchronize_irq+0x40/0xa8
> >  disable_irq+0x20/0x2c
> 
> Given that these things are CPU-affine, I reckon this should be
> disable_irq_nosync. Mark?

Given IRQs are disabled, this should be fine, yes.

FWIW, if you spin this as a patch:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> 
> Will
> 
> --->8
> 
> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> index 0c2ed11c0603..f63db346c219 100644
> --- a/drivers/perf/arm_pmu.c
> +++ b/drivers/perf/arm_pmu.c
> @@ -638,7 +638,7 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
>  		if (irq_is_percpu_devid(irq))
>  			disable_percpu_irq(irq);
>  		else
> -			disable_irq(irq);
> +			disable_irq_nosync(irq);
>  	}
>  
>  	per_cpu(cpu_armpmu, cpu) = NULL;

^ permalink raw reply	[flat|nested] 26+ messages in thread
end of thread, other threads:[~2018-02-26 16:01 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-02-05 16:41 [PATCHv2 0/8] arm_pmu: fix lockdep issues with ACPI systems Mark Rutland
2018-02-05 16:41 ` [PATCHv2 1/8] ARM: ux500: remove PMU IRQ bouncer Mark Rutland
2018-02-05 19:05   ` Linus Walleij
2018-02-06 11:26     ` Mark Rutland
2018-02-05 16:41 ` [PATCHv2 2/8] arm_pmu: kill arm_pmu_platdata Mark Rutland
2018-02-05 16:41 ` [PATCHv2 3/8] arm_pmu: fold platform helpers into platform code Mark Rutland
2018-02-05 16:41 ` [PATCHv2 4/8] arm_pmu: add armpmu_alloc_atomic() Mark Rutland
2018-02-05 16:41 ` [PATCHv2 5/8] arm_pmu: acpi: check for mismatched PPIs Mark Rutland
2018-02-05 16:42 ` [PATCHv2 6/8] arm_pmu: explicitly enable/disable SPIs at hotplug Mark Rutland
2018-02-26 15:16   ` Geert Uytterhoeven
2018-02-26 15:16     ` Geert Uytterhoeven
2018-02-26 15:22     ` Will Deacon
2018-02-26 15:22       ` Will Deacon
2018-02-26 15:56       ` Geert Uytterhoeven
2018-02-26 15:56         ` Geert Uytterhoeven
2018-02-26 16:01       ` Mark Rutland
2018-02-26 16:01         ` Mark Rutland
2018-02-05 16:42 ` [PATCHv2 7/8] arm_pmu: note IRQs and PMUs per-cpu Mark Rutland
2018-02-05 17:07   ` Robin Murphy
2018-02-05 17:13     ` Mark Rutland
2018-02-14 13:11   ` Will Deacon
2018-02-14 13:24     ` Mark Rutland
2018-02-14 13:26       ` Will Deacon
2018-02-14 13:45         ` Mark Rutland
2018-02-14 18:22         ` Mark Rutland
2018-02-05 16:42 ` [PATCHv2 8/8] arm_pmu: acpi: request IRQs up-front Mark Rutland
