linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt
@ 2019-07-17  8:17 Julien Thierry
  2019-07-17  8:17 ` [PATCH v4 1/9] arm64: perf: avoid PMXEV* indirection Julien Thierry
                   ` (10 more replies)
  0 siblings, 11 replies; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  8:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Julien Thierry, peterz, jolsa, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, liwei391

Hi,

After fixing the arm64 Pseudo-NMIs, I'm dusting off this series.

The series makes the arm_pmu driver use NMIs for the perf interrupt when
NMIs are available on the platform (currently, only arm64 + GICv3).

* Patches 1 to 4 remove the need to use spinlocks for the Arm PMU
  driver for Armv7 and Armv8 (aarch64).
* Patch 5 moves the locking to ARMv6-specific code, which is its sole
  remaining user
* Patches 6 and 7 make the PMU interrupt handler NMI-safe
* Patches 8 and 9 enable using pseudo-NMI for the PMU interrupt when
  the feature is available

Changes since v3[1]:
- Added tags
- Fixed build issue for perf_event_v6
- Don't disable preemption in pmu->enable()
- Always rely on IPI_IRQ_WORK to run the queued work
- Fixed typos + cleanups

Changes since v2[2]:
- Rebased on recent linux-next (next-20190708)
- Fixed a number of bugs with indices (reported by Wei)
- Minor style fixes

Changes since v1[3]:
- Rebased on v5.1-rc1
- Pseudo-NMI has changed a lot since then, use the (now merged) NMI API
- Remove locking from armv7 perf_event
- Use locking only in armv6 perf_event
- Use direct counter/type registers instead of the selector register for armv8

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2019-March/640536.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/554611.html

Cheers,

Julien

--

Julien Thierry (8):
  arm64: perf: Remove PMU locking
  arm: perf: save/restore pmsel
  arm: perf: Remove PMU locking
  perf/arm_pmu: Move PMU lock to ARMv6 events
  arm64: perf: Do not call irq_work_run in NMI context
  arm/arm64: kvm: pmu: Make overflow handler NMI safe
  arm_pmu: Introduce pmu_irq_ops
  arm_pmu: Use NMIs for PMU

Mark Rutland (1):
  arm64: perf: avoid PMXEV* indirection

 arch/arm/kernel/perf_event_v6.c |  43 +++++++-----
 arch/arm/kernel/perf_event_v7.c |  79 +++++++---------------
 arch/arm64/kernel/perf_event.c  | 136 ++++++++++++++++++++++++--------------
 drivers/perf/arm_pmu.c          | 143 ++++++++++++++++++++++++++++++++++------
 include/kvm/arm_pmu.h           |   1 +
 include/linux/perf/arm_pmu.h    |   5 --
 virt/kvm/arm/pmu.c              |  25 ++++++-
 7 files changed, 284 insertions(+), 148 deletions(-)

--
1.9.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v4 1/9] arm64: perf: avoid PMXEV* indirection
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
@ 2019-07-17  8:17 ` Julien Thierry
  2019-07-17  8:17 ` [PATCH v4 2/9] arm64: perf: Remove PMU locking Julien Thierry
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  8:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Julien Thierry, peterz, jolsa, will.deacon, acme,
	alexander.shishkin, mingo, Catalin Marinas, namhyung, sthotton,
	liwei391

From: Mark Rutland <mark.rutland@arm.com>

Currently we access the counter registers and their respective type
registers indirectly. This requires us to write to PMSELR, issue an ISB,
then access the relevant PMXEV* registers.

This is unfortunate, because:

* Under virtualization, accessing one register requires two traps to
  the hypervisor, even though we could access the register directly with
  a single trap.

* We have to issue an ISB, whose cost we could otherwise avoid.

* When we use NMIs, the NMI handler will have to save/restore the select
  register in case the code it preempted was attempting to access a
  counter or its type register.

We can avoid these issues by directly accessing the relevant registers.
This patch adds helpers to do so.
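
For reference, the indirect access pattern being removed looks roughly
like this (a sketch only; the real helpers are in the diff below):

/*
 * Old scheme: select the counter via PMSELR_EL0, synchronize with an
 * ISB, then access the selected counter through PMXEVCNTR_EL0.
 */
static u32 read_evcntr_indirect(u32 counter)
{
	write_sysreg(counter, pmselr_el0);	/* first trap under virt */
	isb();
	return read_sysreg(pmxevcntr_el0);	/* second trap under virt */
}

With the direct accessors, the same read is a single sysreg access:
no ISB, and a single trap under virtualization.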

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Julien T.: Don't inline read/write functions to avoid big code-size
	increase, remove unused read_pmevtypern function,
	fix counter index issue.]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Shijith Thotton <sthotton@marvell.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/kernel/perf_event.c | 92 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 79 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 96e90e2..838758f 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -369,6 +369,73 @@ static inline bool armv8pmu_event_is_chained(struct perf_event *event)
 #define	ARMV8_IDX_TO_COUNTER(x)	\
 	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)

+/*
+ * Direct accessors for PMEVCNTR<n>_EL0/PMEVTYPER<n>_EL0, avoiding PMSELR.
+ */
+
+#define PMEVN_CASE(n, case_macro) \
+	case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro)				\
+	do {							\
+		switch (x) {					\
+		PMEVN_CASE(0,  case_macro);			\
+		PMEVN_CASE(1,  case_macro);			\
+		PMEVN_CASE(2,  case_macro);			\
+		PMEVN_CASE(3,  case_macro);			\
+		PMEVN_CASE(4,  case_macro);			\
+		PMEVN_CASE(5,  case_macro);			\
+		PMEVN_CASE(6,  case_macro);			\
+		PMEVN_CASE(7,  case_macro);			\
+		PMEVN_CASE(8,  case_macro);			\
+		PMEVN_CASE(9,  case_macro);			\
+		PMEVN_CASE(10, case_macro);			\
+		PMEVN_CASE(11, case_macro);			\
+		PMEVN_CASE(12, case_macro);			\
+		PMEVN_CASE(13, case_macro);			\
+		PMEVN_CASE(14, case_macro);			\
+		PMEVN_CASE(15, case_macro);			\
+		PMEVN_CASE(16, case_macro);			\
+		PMEVN_CASE(17, case_macro);			\
+		PMEVN_CASE(18, case_macro);			\
+		PMEVN_CASE(19, case_macro);			\
+		PMEVN_CASE(20, case_macro);			\
+		PMEVN_CASE(21, case_macro);			\
+		PMEVN_CASE(22, case_macro);			\
+		PMEVN_CASE(23, case_macro);			\
+		PMEVN_CASE(24, case_macro);			\
+		PMEVN_CASE(25, case_macro);			\
+		PMEVN_CASE(26, case_macro);			\
+		PMEVN_CASE(27, case_macro);			\
+		PMEVN_CASE(28, case_macro);			\
+		PMEVN_CASE(29, case_macro);			\
+		PMEVN_CASE(30, case_macro);			\
+		default: WARN(1, "Invalid PMEV* index");	\
+		}						\
+	} while (0)
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0);
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0);
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0);
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+}
+
 static inline u32 armv8pmu_pmcr_read(void)
 {
 	return read_sysreg(pmcr_el0);
@@ -397,17 +464,11 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
 	return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
 }

-static inline void armv8pmu_select_counter(int idx)
+static inline u32 armv8pmu_read_evcntr(int idx)
 {
 	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
-	write_sysreg(counter, pmselr_el0);
-	isb();
-}

-static inline u32 armv8pmu_read_evcntr(int idx)
-{
-	armv8pmu_select_counter(idx);
-	return read_sysreg(pmxevcntr_el0);
+	return read_pmevcntrn(counter);
 }

 static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
@@ -441,8 +502,9 @@ static u64 armv8pmu_read_counter(struct perf_event *event)

 static inline void armv8pmu_write_evcntr(int idx, u32 value)
 {
-	armv8pmu_select_counter(idx);
-	write_sysreg(value, pmxevcntr_el0);
+	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
+
+	write_pmevcntrn(counter, value);
 }

 static inline void armv8pmu_write_hw_counter(struct perf_event *event,
@@ -483,9 +545,10 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)

 static inline void armv8pmu_write_evtype(int idx, u32 val)
 {
-	armv8pmu_select_counter(idx);
+	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
+
 	val &= ARMV8_PMU_EVTYPE_MASK;
-	write_sysreg(val, pmxevtyper_el0);
+	write_pmevtypern(counter, val);
 }

 static inline void armv8pmu_write_event_type(struct perf_event *event)
@@ -505,7 +568,10 @@ static inline void armv8pmu_write_event_type(struct perf_event *event)
 		armv8pmu_write_evtype(idx - 1, hwc->config_base);
 		armv8pmu_write_evtype(idx, chain_evt);
 	} else {
-		armv8pmu_write_evtype(idx, hwc->config_base);
+		if (idx == ARMV8_IDX_CYCLE_COUNTER)
+			write_sysreg(hwc->config_base, pmccfiltr_el0);
+		else
+			armv8pmu_write_evtype(idx, hwc->config_base);
 	}
 }

--
1.9.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 2/9] arm64: perf: Remove PMU locking
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
  2019-07-17  8:17 ` [PATCH v4 1/9] arm64: perf: avoid PMXEV* indirection Julien Thierry
@ 2019-07-17  8:17 ` Julien Thierry
  2019-08-01 12:58   ` Will Deacon
  2019-07-17  8:17 ` [PATCH v4 3/9] arm: perf: save/restore pmsel Julien Thierry
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  8:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Julien Thierry, peterz, jolsa, will.deacon, acme,
	alexander.shishkin, mingo, Catalin Marinas, namhyung, sthotton,
	liwei391

Since the PMU driver uses direct registers for counter
setup/manipulation, locking around these operations is no longer needed.

For operations that can be called with interrupts enabled, preemption
still needs to be disabled to ensure the PMU is programmed on the
expected CPU and the task is not migrated mid-programming.
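
As an illustration of the constraint (a sketch, not part of the patch),
a caller running with interrupts enabled would need something like:

static void reprogram_event_sketch(struct perf_event *event)
{
	/*
	 * Prevent migration so that both calls program the same CPU's
	 * PMU registers; no lock is needed since only this CPU touches
	 * its own PMU.
	 */
	preempt_disable();
	armv8pmu_disable_event(event);
	armv8pmu_enable_event(event);
	preempt_enable();
}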

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/kernel/perf_event.c | 30 ++----------------------------
 1 file changed, 2 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 838758f..0e2cf5d 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -673,15 +673,10 @@ static inline u32 armv8pmu_getreset_flags(void)

 static void armv8pmu_enable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
 	/*
 	 * Enable counter and interrupt, and set the counter to count
 	 * the event that we're interested in.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);

 	/*
 	 * Disable counter
@@ -702,21 +697,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	 * Enable counter
 	 */
 	armv8pmu_enable_event_counter(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static void armv8pmu_disable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	/*
-	 * Disable counter and interrupt
-	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-
 	/*
 	 * Disable counter
 	 */
@@ -726,30 +710,20 @@ static void armv8pmu_disable_event(struct perf_event *event)
 	 * Disable interrupt for this counter
 	 */
 	armv8pmu_disable_event_irq(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	WARN_ON_ONCE(preemptible());
 	/* Enable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	WARN_ON_ONCE(preemptible());
 	/* Disable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
--
1.9.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 3/9] arm: perf: save/restore pmsel
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
  2019-07-17  8:17 ` [PATCH v4 1/9] arm64: perf: avoid PMXEV* indirection Julien Thierry
  2019-07-17  8:17 ` [PATCH v4 2/9] arm64: perf: Remove PMU locking Julien Thierry
@ 2019-07-17  8:17 ` Julien Thierry
  2019-08-01 13:01   ` Will Deacon
  2019-07-17  8:17 ` [PATCH v4 4/9] arm: perf: Remove PMU locking Julien Thierry
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  8:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Julien Thierry, peterz, jolsa, will.deacon,
	Russell King, acme, alexander.shishkin, mingo, stable, namhyung,
	sthotton, liwei391

The callback pmu->read() can be called with interrupts enabled.
Currently, on ARM, this can cause the following callchain:

armpmu_read() -> armpmu_event_update() -> armv7pmu_read_counter()

The last function might modify the counter selector register and then
read the target counter, without taking any lock. With interrupts
enabled, a PMU interrupt could occur and modify the selector register
as well, between the interrupted context's selection and its read.

Save and restore the value of the selector register in the PMU interrupt
handler, ensuring the interrupted context is left with the correct PMU
registers selected.
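
A minimal sketch of the save/restore discipline, using the helpers this
patch introduces:

static irqreturn_t pmu_handle_irq_sketch(struct arm_pmu *cpu_pmu)
{
	u32 pmsel = armv7_pmsel_read();	/* save interrupted selection */

	/* ... handle overflowed counters, freely moving the selector ... */

	armv7_pmsel_write(pmsel);	/* restore before returning */
	return IRQ_HANDLED;
}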

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: stable@vger.kernel.org
---
 arch/arm/kernel/perf_event_v7.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c
index a4fb0f8..b7be2a3 100644
--- a/arch/arm/kernel/perf_event_v7.c
+++ b/arch/arm/kernel/perf_event_v7.c
@@ -736,10 +736,22 @@ static inline int armv7_pmnc_counter_has_overflowed(u32 pmnc, int idx)
 	return pmnc & BIT(ARMV7_IDX_TO_COUNTER(idx));
 }

-static inline void armv7_pmnc_select_counter(int idx)
+static inline u32 armv7_pmsel_read(void)
+{
+	u32 pmsel;
+
+	asm volatile("mrc p15, 0, %0, c9, c12, 5" : "=&r" (pmsel));
+	return pmsel;
+}
+
+static inline void armv7_pmsel_write(u32 counter)
 {
-	u32 counter = ARMV7_IDX_TO_COUNTER(idx);
 	asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (counter));
+}
+
+static inline void armv7_pmnc_select_counter(int idx)
+{
+	armv7_pmsel_write(ARMV7_IDX_TO_COUNTER(idx));
 	isb();
 }

@@ -952,8 +964,15 @@ static irqreturn_t armv7pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	struct perf_sample_data data;
 	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
 	struct pt_regs *regs;
+	u32 pmsel;
 	int idx;

+
+	/*
+	 * Save pmsel in case the interrupted context was using it.
+	 */
+	pmsel = armv7_pmsel_read();
+
 	/*
 	 * Get and reset the IRQ flags
 	 */
@@ -1004,6 +1023,8 @@ static irqreturn_t armv7pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	 */
 	irq_work_run();

+	armv7_pmsel_write(pmsel);
+
 	return IRQ_HANDLED;
 }

--
1.9.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 4/9] arm: perf: Remove PMU locking
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
                   ` (2 preceding siblings ...)
  2019-07-17  8:17 ` [PATCH v4 3/9] arm: perf: save/restore pmsel Julien Thierry
@ 2019-07-17  8:17 ` Julien Thierry
  2019-08-01 13:06   ` Will Deacon
  2019-07-17  8:17 ` [PATCH v4 5/9] perf/arm_pmu: Move PMU lock to ARMv6 events Julien Thierry
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  8:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Julien Thierry, peterz, jolsa, will.deacon,
	Russell King, acme, alexander.shishkin, mingo, namhyung,
	sthotton, liwei391

Since the PMU interrupt handler saves and restores the value of the
selector register, there is no need to serialize register accesses
against the interrupt context.

For operations that can be called with interrupts enabled, preemption
still needs to be disabled to ensure the PMU is programmed on the
expected CPU and the task is not migrated mid-programming.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Russell King <linux@armlinux.org.uk>
---
 arch/arm/kernel/perf_event_v7.c | 54 ++---------------------------------------
 1 file changed, 2 insertions(+), 52 deletions(-)

diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c
index b7be2a3..9655127 100644
--- a/arch/arm/kernel/perf_event_v7.c
+++ b/arch/arm/kernel/perf_event_v7.c
@@ -882,10 +882,8 @@ static void armv7_pmnc_dump_regs(struct arm_pmu *cpu_pmu)

 static void armv7pmu_enable_event(struct perf_event *event)
 {
-	unsigned long flags;
 	struct hw_perf_event *hwc = &event->hw;
 	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
 	int idx = hwc->idx;

 	if (!armv7_pmnc_counter_valid(cpu_pmu, idx)) {
@@ -898,7 +896,6 @@ static void armv7pmu_enable_event(struct perf_event *event)
 	 * Enable counter and interrupt, and set the counter to count
 	 * the event that we're interested in.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);

 	/*
 	 * Disable counter
@@ -922,16 +919,12 @@ static void armv7pmu_enable_event(struct perf_event *event)
 	 * Enable counter
 	 */
 	armv7_pmnc_enable_counter(idx);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static void armv7pmu_disable_event(struct perf_event *event)
 {
-	unsigned long flags;
 	struct hw_perf_event *hwc = &event->hw;
 	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
 	int idx = hwc->idx;

 	if (!armv7_pmnc_counter_valid(cpu_pmu, idx)) {
@@ -941,11 +934,6 @@ static void armv7pmu_disable_event(struct perf_event *event)
 	}

 	/*
-	 * Disable counter and interrupt
-	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-
-	/*
 	 * Disable counter
 	 */
 	armv7_pmnc_disable_counter(idx);
@@ -954,8 +942,6 @@ static void armv7pmu_disable_event(struct perf_event *event)
 	 * Disable interrupt for this counter
 	 */
 	armv7_pmnc_disable_intens(idx);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static irqreturn_t armv7pmu_handle_irq(struct arm_pmu *cpu_pmu)
@@ -1030,24 +1016,16 @@ static irqreturn_t armv7pmu_handle_irq(struct arm_pmu *cpu_pmu)

 static void armv7pmu_start(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	WARN_ON_ONCE(preemptible());
 	/* Enable all counters */
 	armv7_pmnc_write(armv7_pmnc_read() | ARMV7_PMNC_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static void armv7pmu_stop(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	WARN_ON_ONCE(preemptible());
 	/* Disable all counters */
 	armv7_pmnc_write(armv7_pmnc_read() & ~ARMV7_PMNC_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static int armv7pmu_get_event_idx(struct pmu_hw_events *cpuc,
@@ -1513,14 +1491,8 @@ static void krait_clearpmu(u32 config_base)

 static void krait_pmu_disable_event(struct perf_event *event)
 {
-	unsigned long flags;
 	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	/* Disable counter and interrupt */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);

 	/* Disable counter */
 	armv7_pmnc_disable_counter(idx);
@@ -1533,23 +1505,17 @@ static void krait_pmu_disable_event(struct perf_event *event)

 	/* Disable interrupt for this counter */
 	armv7_pmnc_disable_intens(idx);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static void krait_pmu_enable_event(struct perf_event *event)
 {
-	unsigned long flags;
 	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);

 	/*
 	 * Enable counter and interrupt, and set the counter to count
 	 * the event that we're interested in.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);

 	/* Disable counter */
 	armv7_pmnc_disable_counter(idx);
@@ -1569,8 +1535,6 @@ static void krait_pmu_enable_event(struct perf_event *event)

 	/* Enable counter */
 	armv7_pmnc_enable_counter(idx);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static void krait_pmu_reset(void *info)
@@ -1846,14 +1810,8 @@ static void scorpion_clearpmu(u32 config_base)

 static void scorpion_pmu_disable_event(struct perf_event *event)
 {
-	unsigned long flags;
 	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	/* Disable counter and interrupt */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);

 	/* Disable counter */
 	armv7_pmnc_disable_counter(idx);
@@ -1866,23 +1824,17 @@ static void scorpion_pmu_disable_event(struct perf_event *event)

 	/* Disable interrupt for this counter */
 	armv7_pmnc_disable_intens(idx);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static void scorpion_pmu_enable_event(struct perf_event *event)
 {
-	unsigned long flags;
 	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);

 	/*
 	 * Enable counter and interrupt, and set the counter to count
 	 * the event that we're interested in.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);

 	/* Disable counter */
 	armv7_pmnc_disable_counter(idx);
@@ -1902,8 +1854,6 @@ static void scorpion_pmu_enable_event(struct perf_event *event)

 	/* Enable counter */
 	armv7_pmnc_enable_counter(idx);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }

 static void scorpion_pmu_reset(void *info)
--
1.9.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 5/9] perf/arm_pmu: Move PMU lock to ARMv6 events
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
                   ` (3 preceding siblings ...)
  2019-07-17  8:17 ` [PATCH v4 4/9] arm: perf: Remove PMU locking Julien Thierry
@ 2019-07-17  8:17 ` Julien Thierry
  2019-07-17  8:17 ` [PATCH v4 6/9] arm64: perf: Do not call irq_work_run in NMI context Julien Thierry
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  8:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Julien Thierry, peterz, jolsa, will.deacon,
	Russell King, acme, alexander.shishkin, mingo, namhyung,
	sthotton, liwei391

The perf event backends for ARMv8 and ARMv7 no longer use the pmu_lock.
The only remaining user is the ARMv6 event backend.

Move the pmu_lock out of the generic arm_pmu driver into the ARMv6 code.
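
The read/modify/write sequences that still need serializing on ARMv6
have this shape (a sketch based on the helpers already in
perf_event_v6.c):

static void armv6_pmcr_update_sketch(unsigned long mask, unsigned long evt)
{
	unsigned long flags, val;
	raw_spinlock_t *lock = this_cpu_ptr(&pmu_lock);

	raw_spin_lock_irqsave(lock, flags);	/* serialize the RMW */
	val = armv6_pmcr_read();
	val &= ~mask;
	val |= evt;
	armv6_pmcr_write(val);
	raw_spin_unlock_irqrestore(lock, flags);
}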

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Russell King <linux@armlinux.org.uk>
---
 arch/arm/kernel/perf_event_v6.c | 43 ++++++++++++++++++++++++-----------------
 drivers/perf/arm_pmu.c          |  1 -
 include/linux/perf/arm_pmu.h    |  5 -----
 3 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/arch/arm/kernel/perf_event_v6.c b/arch/arm/kernel/perf_event_v6.c
index 1ae99de..2af0d4a 100644
--- a/arch/arm/kernel/perf_event_v6.c
+++ b/arch/arm/kernel/perf_event_v6.c
@@ -69,6 +69,12 @@ enum armv6_counters {
 };

 /*
+ * Hardware lock to serialize accesses to PMU registers. Needed for the
+ * read/modify/write sequences.
+ */
+static DEFINE_PER_CPU(raw_spinlock_t, pmu_lock);
+
+/*
  * The hardware events that we support. We do support cache operations but
  * we have harvard caches and no way to combine instruction and data
  * accesses/misses in hardware.
@@ -269,9 +275,8 @@ static inline void armv6pmu_write_counter(struct perf_event *event, u64 value)
 static void armv6pmu_enable_event(struct perf_event *event)
 {
 	unsigned long val, mask, evt, flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+	raw_spinlock_t *lock = this_cpu_ptr(&pmu_lock);
 	int idx = hwc->idx;

 	if (ARMV6_CYCLE_COUNTER == idx) {
@@ -294,12 +299,12 @@ static void armv6pmu_enable_event(struct perf_event *event)
 	 * Mask out the current event and set the counter to count the event
 	 * that we're interested in.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = armv6_pmcr_read();
 	val &= ~mask;
 	val |= evt;
 	armv6_pmcr_write(val);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 }

 static irqreturn_t
@@ -363,25 +368,25 @@ static void armv6pmu_enable_event(struct perf_event *event)
 static void armv6pmu_start(struct arm_pmu *cpu_pmu)
 {
 	unsigned long flags, val;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+	raw_spinlock_t *lock = this_cpu_ptr(&pmu_lock);

-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = armv6_pmcr_read();
 	val |= ARMV6_PMCR_ENABLE;
 	armv6_pmcr_write(val);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 }

 static void armv6pmu_stop(struct arm_pmu *cpu_pmu)
 {
 	unsigned long flags, val;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+	raw_spinlock_t *lock = this_cpu_ptr(&pmu_lock);

-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = armv6_pmcr_read();
 	val &= ~ARMV6_PMCR_ENABLE;
 	armv6_pmcr_write(val);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 }

 static int
@@ -420,9 +425,8 @@ static void armv6pmu_clear_event_idx(struct pmu_hw_events *cpuc,
 static void armv6pmu_disable_event(struct perf_event *event)
 {
 	unsigned long val, mask, evt, flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+	raw_spinlock_t *lock = this_cpu_ptr(&pmu_lock);
 	int idx = hwc->idx;

 	if (ARMV6_CYCLE_COUNTER == idx) {
@@ -444,20 +448,19 @@ static void armv6pmu_disable_event(struct perf_event *event)
 	 * of ETM bus signal assertion cycles. The external reporting should
 	 * be disabled and so this should never increment.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = armv6_pmcr_read();
 	val &= ~mask;
 	val |= evt;
 	armv6_pmcr_write(val);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 }

 static void armv6mpcore_pmu_disable_event(struct perf_event *event)
 {
 	unsigned long val, mask, flags, evt = 0;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+	raw_spinlock_t *lock = this_cpu_ptr(&pmu_lock);
 	int idx = hwc->idx;

 	if (ARMV6_CYCLE_COUNTER == idx) {
@@ -475,12 +478,12 @@ static void armv6mpcore_pmu_disable_event(struct perf_event *event)
 	 * Unlike UP ARMv6, we don't have a way of stopping the counters. We
 	 * simply disable the interrupt reporting.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	raw_spin_lock_irqsave(lock, flags);
 	val = armv6_pmcr_read();
 	val &= ~mask;
 	val |= evt;
 	armv6_pmcr_write(val);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+	raw_spin_unlock_irqrestore(lock, flags);
 }

 static int armv6_map_event(struct perf_event *event)
@@ -502,6 +505,8 @@ static void armv6pmu_init(struct arm_pmu *cpu_pmu)
 	cpu_pmu->stop		= armv6pmu_stop;
 	cpu_pmu->map_event	= armv6_map_event;
 	cpu_pmu->num_events	= 3;
+
+	raw_spin_lock_init(this_cpu_ptr(&pmu_lock));
 }

 static int armv6_1136_pmu_init(struct arm_pmu *cpu_pmu)
@@ -554,6 +559,8 @@ static int armv6mpcore_pmu_init(struct arm_pmu *cpu_pmu)
 	cpu_pmu->map_event	= armv6mpcore_map_event;
 	cpu_pmu->num_events	= 3;

+	raw_spin_lock_init(this_cpu_ptr(&pmu_lock));
+
 	return 0;
 }

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 2d06b80..7fd9f15 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -823,7 +823,6 @@ static struct arm_pmu *__armpmu_alloc(gfp_t flags)
 		struct pmu_hw_events *events;

 		events = per_cpu_ptr(pmu->hw_events, cpu);
-		raw_spin_lock_init(&events->pmu_lock);
 		events->percpu_pmu = pmu;
 	}

diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 71f525a..8640b23 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -54,11 +54,6 @@ struct pmu_hw_events {
 	 */
 	DECLARE_BITMAP(used_mask, ARMPMU_MAX_HWEVENTS);

-	/*
-	 * Hardware lock to serialize accesses to PMU registers. Needed for the
-	 * read/modify/write sequences.
-	 */
-	raw_spinlock_t		pmu_lock;

 	/*
 	 * When using percpu IRQs, we need a percpu dev_id. Place it here as we
--
1.9.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 6/9] arm64: perf: Do not call irq_work_run in NMI context
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
                   ` (4 preceding siblings ...)
  2019-07-17  8:17 ` [PATCH v4 5/9] perf/arm_pmu: Move PMU lock to ARMv6 events Julien Thierry
@ 2019-07-17  8:17 ` Julien Thierry
  2019-08-01 13:06   ` Will Deacon
  2019-07-17  8:17 ` [PATCH v4 7/9] arm/arm64: kvm: pmu: Make overflow handler NMI safe Julien Thierry
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  8:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Julien Thierry, peterz, jolsa, will.deacon, acme,
	alexander.shishkin, mingo, Catalin Marinas, namhyung, sthotton,
	liwei391

The function irq_work_run() is not NMI safe and should not be called
from NMI context.

When the PMU interrupt is an NMI, do not call irq_work_run(). Instead,
rely on the IRQ work IPI to run the irq_work queue once the NMI/IRQ
contexts have been exited.
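
The resulting rule, as a sketch:

	if (!in_nmi())
		irq_work_run();	/* only legal outside NMI context */
	/*
	 * In NMI context, do nothing: the work queued by
	 * perf_event_overflow() is run from the IPI_IRQ_WORK interrupt
	 * once the NMI has been exited.
	 */

The handler below drops the call entirely, since the IPI path covers
the plain-IRQ case as well.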

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/kernel/perf_event.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 0e2cf5d..9c959ad 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -776,20 +776,16 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
+		/*
+		 * Perf event overflow will queue the processing of the event as
+		 * an irq_work which will be taken care of in the handling of
+		 * IPI_IRQ_WORK.
+		 */
 		if (perf_event_overflow(event, &data, regs))
 			cpu_pmu->disable(event);
 	}
 	armv8pmu_start(cpu_pmu);
 
-	/*
-	 * Handle the pending perf events.
-	 *
-	 * Note: this call *must* be run with interrupts disabled. For
-	 * platforms that can have the PMU interrupts raised as an NMI, this
-	 * will not work.
-	 */
-	irq_work_run();
-
 	return IRQ_HANDLED;
 }
 
-- 
1.9.1



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 7/9] arm/arm64: kvm: pmu: Make overflow handler NMI safe
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
                   ` (5 preceding siblings ...)
  2019-07-17  8:17 ` [PATCH v4 6/9] arm64: perf: Do not call irq_work_run in NMI context Julien Thierry
@ 2019-07-17  8:17 ` Julien Thierry
  2019-07-17  8:17 ` [PATCH v4 8/9] arm_pmu: Introduce pmu_irq_ops Julien Thierry
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  8:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Julien Thierry, peterz, jolsa, Suzuki K Pouloze,
	will.deacon, Christoffer Dall, acme, alexander.shishkin, mingo,
	James Morse, Marc Zyngier, namhyung, sthotton, liwei391, kvmarm

When using an NMI for the PMU interrupt, taking any lock might cause a
deadlock. The current PMU overflow handler in KVM takes locks when
trying to wake up a vcpu.

When the overflow handler is called in NMI context, defer waking the
vcpu to an irq_work queue.
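
As a self-contained sketch of the deferral pattern (wake_target_vcpu()
is a hypothetical stand-in; the real code is in the diff below):

static struct irq_work kick_work; /* init_irq_work(&kick_work, kick_fn) at setup */

static void kick_fn(struct irq_work *work)
{
	/* Runs from IPI_IRQ_WORK, in IRQ context, where locks are safe. */
	wake_target_vcpu();
}

static void overflow_handler_sketch(void)
{
	if (!in_nmi())
		wake_target_vcpu();		/* safe: not in NMI */
	else
		irq_work_queue(&kick_work);	/* defer out of NMI */
}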

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 include/kvm/arm_pmu.h |  1 +
 virt/kvm/arm/pmu.c    | 25 ++++++++++++++++++++++++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 16c769a..8202ed7 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -27,6 +27,7 @@ struct kvm_pmu {
 	bool ready;
 	bool created;
 	bool irq_level;
+	struct irq_work overflow_work;
 };

 #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 3dd8238..deed8fb 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -421,6 +421,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
 }

 /**
+ * When the perf interrupt is an NMI, we cannot safely notify the vcpu
+ * corresponding to the event.
+ * This is why we need a callback to do it once outside of the NMI context.
+ */
+static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_pmu *pmu;
+
+	pmu = container_of(work, struct kvm_pmu, overflow_work);
+	vcpu = kvm_pmc_to_vcpu(&pmu->pmc[0]);
+
+	kvm_vcpu_kick(vcpu);
+}
+
+/**
  * When the perf event overflows, set the overflow status and inform the vcpu.
  */
 static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
@@ -435,7 +451,11 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,

 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
-		kvm_vcpu_kick(vcpu);
+
+		if (!in_nmi())
+			kvm_vcpu_kick(vcpu);
+		else
+			irq_work_queue(&vcpu->arch.pmu.overflow_work);
 	}
 }

@@ -706,6 +726,9 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 			return ret;
 	}

+	init_irq_work(&vcpu->arch.pmu.overflow_work,
+		      kvm_pmu_perf_overflow_notify_vcpu);
+
 	vcpu->arch.pmu.created = true;
 	return 0;
 }
--
1.9.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 8/9] arm_pmu: Introduce pmu_irq_ops
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
                   ` (6 preceding siblings ...)
  2019-07-17  8:17 ` [PATCH v4 7/9] arm/arm64: kvm: pmu: Make overflow handler NMI safe Julien Thierry
@ 2019-07-17  8:17 ` Julien Thierry
  2019-07-17  8:17 ` [PATCH v4 9/9] arm_pmu: Use NMIs for PMU Julien Thierry
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  8:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Julien Thierry, peterz, jolsa, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, liwei391

Currently the PMU interrupt can either be a normal irq or a percpu irq.
Supporting NMIs will introduce two cases for each existing one, turning
interrupt management into a mess of 'if's.

Define sets of callbacks for operations commonly done on the interrupt. The
appropriate set of callbacks is selected at interrupt request time and
simplifies interrupt enabling/disabling and freeing.
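
The shape of the abstraction (a sketch; the full definitions are in the
diff below):

struct pmu_irq_ops {
	void (*enable_pmuirq)(unsigned int irq);
	void (*disable_pmuirq)(unsigned int irq);
	void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
};

/* Instead of "if (irq_is_percpu_devid(irq)) ... else ..." at each site: */
per_cpu(cpu_irq_ops, cpu)->enable_pmuirq(irq);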

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Shijith Thotton <sthotton@marvell.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
---
 drivers/perf/arm_pmu.c | 86 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 70 insertions(+), 16 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 7fd9f15..9ac072a 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -26,8 +26,46 @@

 #include <asm/irq_regs.h>

+static int armpmu_count_irq_users(const int irq);
+
+struct pmu_irq_ops {
+	void (*enable_pmuirq)(unsigned int irq);
+	void (*disable_pmuirq)(unsigned int irq);
+	void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
+};
+
+static void armpmu_free_pmuirq(unsigned int irq, int cpu, void __percpu *devid)
+{
+	free_irq(irq, per_cpu_ptr(devid, cpu));
+}
+
+static const struct pmu_irq_ops pmuirq_ops = {
+	.enable_pmuirq = enable_irq,
+	.disable_pmuirq = disable_irq_nosync,
+	.free_pmuirq = armpmu_free_pmuirq
+};
+
+static void armpmu_enable_percpu_pmuirq(unsigned int irq)
+{
+	enable_percpu_irq(irq, IRQ_TYPE_NONE);
+}
+
+static void armpmu_free_percpu_pmuirq(unsigned int irq, int cpu,
+				   void __percpu *devid)
+{
+	if (armpmu_count_irq_users(irq) == 1)
+		free_percpu_irq(irq, devid);
+}
+
+static const struct pmu_irq_ops percpu_pmuirq_ops = {
+	.enable_pmuirq = armpmu_enable_percpu_pmuirq,
+	.disable_pmuirq = disable_percpu_irq,
+	.free_pmuirq = armpmu_free_percpu_pmuirq
+};
+
 static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
 static DEFINE_PER_CPU(int, cpu_irq);
+static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);

 static inline u64 arm_pmu_event_max_period(struct perf_event *event)
 {
@@ -544,6 +582,19 @@ static int armpmu_count_irq_users(const int irq)
 	return count;
 }

+static const struct pmu_irq_ops *armpmu_find_irq_ops(int irq)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(cpu_irq, cpu) == irq
+		    && per_cpu(cpu_irq_ops, cpu))
+			return per_cpu(cpu_irq_ops, cpu);
+	}
+
+	return NULL;
+}
+
 void armpmu_free_irq(int irq, int cpu)
 {
 	if (per_cpu(cpu_irq, cpu) == 0)
@@ -551,18 +602,18 @@ void armpmu_free_irq(int irq, int cpu)
 	if (WARN_ON(irq != per_cpu(cpu_irq, cpu)))
 		return;

-	if (!irq_is_percpu_devid(irq))
-		free_irq(irq, per_cpu_ptr(&cpu_armpmu, cpu));
-	else if (armpmu_count_irq_users(irq) == 1)
-		free_percpu_irq(irq, &cpu_armpmu);
+	per_cpu(cpu_irq_ops, cpu)->free_pmuirq(irq, cpu, &cpu_armpmu);

 	per_cpu(cpu_irq, cpu) = 0;
+	per_cpu(cpu_irq_ops, cpu) = NULL;
 }

 int armpmu_request_irq(int irq, int cpu)
 {
 	int err = 0;
 	const irq_handler_t handler = armpmu_dispatch_irq;
+	const struct pmu_irq_ops *irq_ops;
+
 	if (!irq)
 		return 0;

@@ -584,15 +635,26 @@ int armpmu_request_irq(int irq, int cpu)
 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
 		err = request_irq(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&cpu_armpmu, cpu));
+
+		irq_ops = &pmuirq_ops;
 	} else if (armpmu_count_irq_users(irq) == 0) {
 		err = request_percpu_irq(irq, handler, "arm-pmu",
 					 &cpu_armpmu);
+
+		irq_ops = &percpu_pmuirq_ops;
+	} else {
+		/* Per-CPU devid irq was already requested by another CPU */
+		irq_ops = armpmu_find_irq_ops(irq);
+
+		if (WARN_ON(!irq_ops))
+			err = -EINVAL;
 	}

 	if (err)
 		goto err_out;

 	per_cpu(cpu_irq, cpu) = irq;
+	per_cpu(cpu_irq_ops, cpu) = irq_ops;
 	return 0;

 err_out:
@@ -625,12 +687,8 @@ static int arm_perf_starting_cpu(unsigned int cpu, struct hlist_node *node)
 	per_cpu(cpu_armpmu, cpu) = pmu;

 	irq = armpmu_get_cpu_irq(pmu, cpu);
-	if (irq) {
-		if (irq_is_percpu_devid(irq))
-			enable_percpu_irq(irq, IRQ_TYPE_NONE);
-		else
-			enable_irq(irq);
-	}
+	if (irq)
+		per_cpu(cpu_irq_ops, cpu)->enable_pmuirq(irq);

 	return 0;
 }
@@ -644,12 +702,8 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
 		return 0;

 	irq = armpmu_get_cpu_irq(pmu, cpu);
-	if (irq) {
-		if (irq_is_percpu_devid(irq))
-			disable_percpu_irq(irq);
-		else
-			disable_irq_nosync(irq);
-	}
+	if (irq)
+		per_cpu(cpu_irq_ops, cpu)->disable_pmuirq(irq);

 	per_cpu(cpu_armpmu, cpu) = NULL;

--
1.9.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 9/9] arm_pmu: Use NMIs for PMU
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
                   ` (7 preceding siblings ...)
  2019-07-17  8:17 ` [PATCH v4 8/9] arm_pmu: Introduce pmu_irq_ops Julien Thierry
@ 2019-07-17  8:17 ` Julien Thierry
  2019-07-30  9:11   ` Russell King - ARM Linux admin
  2019-07-17  9:02 ` [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
  2019-07-30  9:05 ` Julien Thierry
  10 siblings, 1 reply; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  8:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, Julien Thierry, peterz, jolsa, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, liwei391

Add required PMU interrupt operations for NMIs. Request interrupt lines as
NMIs when possible, otherwise fall back to normal interrupts.
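
The fallback boils down to this pattern (a sketch; dev_id stands in for
the per-cpu pointer the driver actually passes):

	err = request_nmi(irq, handler, irq_flags, "arm-pmu", dev_id);
	if (err) {
		/* No NMI support on this platform: use a normal interrupt. */
		err = request_irq(irq, handler, irq_flags, "arm-pmu", dev_id);
		irq_ops = &pmuirq_ops;
	} else {
		irq_ops = &pmunmi_ops;
	}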

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Shijith Thotton <sthotton@marvell.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
---
 drivers/perf/arm_pmu.c | 62 +++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 56 insertions(+), 6 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 9ac072a..a9c00cd 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -45,6 +45,17 @@ static void armpmu_free_pmuirq(unsigned int irq, int cpu, void __percpu *devid)
 	.free_pmuirq = armpmu_free_pmuirq
 };

+static void armpmu_free_pmunmi(unsigned int irq, int cpu, void __percpu *devid)
+{
+	free_nmi(irq, per_cpu_ptr(devid, cpu));
+}
+
+static const struct pmu_irq_ops pmunmi_ops = {
+	.enable_pmuirq = enable_nmi,
+	.disable_pmuirq = disable_nmi_nosync,
+	.free_pmuirq = armpmu_free_pmunmi
+};
+
 static void armpmu_enable_percpu_pmuirq(unsigned int irq)
 {
 	enable_percpu_irq(irq, IRQ_TYPE_NONE);
@@ -63,6 +74,31 @@ static void armpmu_free_percpu_pmuirq(unsigned int irq, int cpu,
 	.free_pmuirq = armpmu_free_percpu_pmuirq
 };

+static void armpmu_enable_percpu_pmunmi(unsigned int irq)
+{
+	if (!prepare_percpu_nmi(irq))
+		enable_percpu_nmi(irq, IRQ_TYPE_NONE);
+}
+
+static void armpmu_disable_percpu_pmunmi(unsigned int irq)
+{
+	disable_percpu_nmi(irq);
+	teardown_percpu_nmi(irq);
+}
+
+static void armpmu_free_percpu_pmunmi(unsigned int irq, int cpu,
+				      void __percpu *devid)
+{
+	if (armpmu_count_irq_users(irq) == 1)
+		free_percpu_nmi(irq, devid);
+}
+
+static const struct pmu_irq_ops percpu_pmunmi_ops = {
+	.enable_pmuirq = armpmu_enable_percpu_pmunmi,
+	.disable_pmuirq = armpmu_disable_percpu_pmunmi,
+	.free_pmuirq = armpmu_free_percpu_pmunmi
+};
+
 static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
 static DEFINE_PER_CPU(int, cpu_irq);
 static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);
@@ -633,15 +669,29 @@ int armpmu_request_irq(int irq, int cpu)
 			    IRQF_NO_THREAD;

 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
-		err = request_irq(irq, handler, irq_flags, "arm-pmu",
+
+		err = request_nmi(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&cpu_armpmu, cpu));

-		irq_ops = &pmuirq_ops;
+		/* If we cannot get an NMI, fall back to a normal interrupt */
+		if (err) {
+			err = request_irq(irq, handler, irq_flags, "arm-pmu",
+					  per_cpu_ptr(&cpu_armpmu, cpu));
+			irq_ops = &pmuirq_ops;
+		} else {
+			irq_ops = &pmunmi_ops;
+		}
 	} else if (armpmu_count_irq_users(irq) == 0) {
-		err = request_percpu_irq(irq, handler, "arm-pmu",
-					 &cpu_armpmu);
-
-		irq_ops = &percpu_pmuirq_ops;
+		err = request_percpu_nmi(irq, handler, "arm-pmu", &cpu_armpmu);
+
+		/* If we cannot get an NMI, fall back to a normal interrupt */
+		if (err) {
+			err = request_percpu_irq(irq, handler, "arm-pmu",
+						 &cpu_armpmu);
+			irq_ops = &percpu_pmuirq_ops;
+		} else {
+			irq_ops = &percpu_pmunmi_ops;
+		}
 	} else {
 		/* Per cpudevid irq was already requested by another CPU */
 		irq_ops = armpmu_find_irq_ops(irq);
--
1.9.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
                   ` (8 preceding siblings ...)
  2019-07-17  8:17 ` [PATCH v4 9/9] arm_pmu: Use NMIs for PMU Julien Thierry
@ 2019-07-17  9:02 ` Julien Thierry
  2019-07-30  9:05 ` Julien Thierry
  10 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-07-17  9:02 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, peterz, jolsa, Will Deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, liwei391



On 17/07/2019 09:17, Julien Thierry wrote:
> Hi,
>
> After fixing the arm64 Pseudo-NMIs, I'm dusting off this series.
>
> The series makes the arm_pmu driver use NMIs for the perf interrupt when
> NMIs are available on the platform (currently, only arm64 + GICv3).
>
> * Patches 1 to 4 remove the need to use spinlocks for the Arm PMU
>   driver for Armv7 and Armv8 (aarch64).
> * Patch 5 moves the locking to ARMv6-specific code, which is its sole
>   remaining user
> * Patches 6 and 7 make the PMU interrupt handler NMI-safe
> * Patches 8 and 9 enable using pseudo-NMI for the PMU interrupt when
>   the feature is available
>
> Changes since v3[1]:
> - Added tags
> - Fixed build issue for perf_event_v6
> - Don't disable preemption in pmu->enable()
> - Always rely on IPI_IRQ_WORK to run the queued work
> - Fixed typos + cleanups
>
> Changes since v2[2]:
> - Rebased on recent linux-next (next-20190708)
> - Fixed a number of bugs with indices (reported by Wei)
> - Minor style fixes
>
> Changes since v1[3]:
> - Rebased on v5.1-rc1
> - Pseudo-NMI has changed a lot since then, use the (now merged) NMI API
> - Remove locking from armv7 perf_event
> - Use locking only in armv6 perf_event
> - Use direct counter/type registers instead of the selector register for armv8
>
> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2019-March/640536.html
> [2] http://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/554611.html
>

Huh, I forgot to update the links, sorry:

[1]
http://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/665339.html
[2]
http://lists.infradead.org/pipermail/linux-arm-kernel/2019-March/640536.html
[3]
http://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/554611.html

Cheers,

--
Julien Thierry

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt
  2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
                   ` (9 preceding siblings ...)
  2019-07-17  9:02 ` [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
@ 2019-07-30  9:05 ` Julien Thierry
  10 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-07-30  9:05 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, peterz, jolsa, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, liwei391,
	julien.thierry.kdev

Hi,

Just a gentle ping on this series.

Cheers,

Julien

On 17/07/2019 09:17, Julien Thierry wrote:
> Hi,
> 
> After fixing the arm64 Pseudo-NMIs, I'm dusting off this series.
> 
> The series makes the arm_pmu driver use NMIs for the perf interrupt when
> NMIs are available on the platform (currently, only arm64 + GICv3).
> 
> * Patches 1 to 4 remove the need to use spinlocks for the Arm PMU
>   driver for Armv7 and Armv8 (aarch64).
> * Patch 5 moves the locking to ARMv6-specific code, which is its sole
>   remaining user
> * Patches 6 and 7 make the PMU interrupt handler NMI-safe
> * Patches 8 and 9 enable using pseudo-NMI for the PMU interrupt when
>   the feature is available
> 
> Changes since v3[1]:
> - Added tags
> - Fixed build issue for perf_event_v6
> - Don't disable preemption in pmu->enable()
> - Always rely on IPI_IRQ_WORK to run the queued work
> - Fixed typos + cleanups
> 
> Changes since v2[2]:
> - Rebased on recent linux-next (next-20190708)
> - Fixed a number of bugs with indices (reported by Wei)
> - Minor style fixes
> 
> Changes since v1[3]:
> - Rebased on v5.1-rc1
> - Pseudo-NMI has changed a lot since then, use the (now merged) NMI API
> - Remove locking from armv7 perf_event
> - Use locking only in armv6 perf_event
> - Use direct counter/type registers instead of the selector register for armv8
> 
> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2019-March/640536.html
> [2] http://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/554611.html
> 
> Cheers,
> 
> Julien
> 
> --
> 
> Julien Thierry (8):
>   arm64: perf: Remove PMU locking
>   arm: perf: save/restore pmsel
>   arm: perf: Remove PMU locking
>   perf/arm_pmu: Move PMU lock to ARMv6 events
>   arm64: perf: Do not call irq_work_run in NMI context
>   arm/arm64: kvm: pmu: Make overflow handler NMI safe
>   arm_pmu: Introduce pmu_irq_ops
>   arm_pmu: Use NMIs for PMU
> 
> Mark Rutland (1):
>   arm64: perf: avoid PMXEV* indirection
> 
>  arch/arm/kernel/perf_event_v6.c |  43 +++++++-----
>  arch/arm/kernel/perf_event_v7.c |  79 +++++++---------------
>  arch/arm64/kernel/perf_event.c  | 136 ++++++++++++++++++++++++--------------
>  drivers/perf/arm_pmu.c          | 143 ++++++++++++++++++++++++++++++++++------
>  include/kvm/arm_pmu.h           |   1 +
>  include/linux/perf/arm_pmu.h    |   5 --
>  virt/kvm/arm/pmu.c              |  25 ++++++-
>  7 files changed, 284 insertions(+), 148 deletions(-)
> 
> --
> 1.9.1
> 

-- 
Julien Thierry


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 9/9] arm_pmu: Use NMIs for PMU
  2019-07-17  8:17 ` [PATCH v4 9/9] arm_pmu: Use NMIs for PMU Julien Thierry
@ 2019-07-30  9:11   ` Russell King - ARM Linux admin
  2019-07-30  9:18     ` Julien Thierry
  0 siblings, 1 reply; 24+ messages in thread
From: Russell King - ARM Linux admin @ 2019-07-30  9:11 UTC (permalink / raw)
  To: Julien Thierry
  Cc: mark.rutland, peterz, liwei391, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, jolsa,
	linux-arm-kernel

On Wed, Jul 17, 2019 at 09:17:12AM +0100, Julien Thierry wrote:
> Add required PMU interrupt operations for NMIs. Request interrupt lines as
> NMIs when possible, otherwise fall back to normal interrupts.
> 
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> Tested-by: Shijith Thotton <sthotton@marvell.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>

This has no effect on 32-bit ARM?

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 9/9] arm_pmu: Use NMIs for PMU
  2019-07-30  9:11   ` Russell King - ARM Linux admin
@ 2019-07-30  9:18     ` Julien Thierry
  2019-07-30  9:28       ` Russell King - ARM Linux admin
  0 siblings, 1 reply; 24+ messages in thread
From: Julien Thierry @ 2019-07-30  9:18 UTC (permalink / raw)
  To: Russell King - ARM Linux admin
  Cc: mark.rutland, peterz, liwei391, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, jolsa,
	linux-arm-kernel

Hi Russell,

On 30/07/2019 10:11, Russell King - ARM Linux admin wrote:
> On Wed, Jul 17, 2019 at 09:17:12AM +0100, Julien Thierry wrote:
>> Add required PMU interrupt operations for NMIs. Request interrupt lines as
>> NMIs when possible, otherwise fall back to normal interrupts.
>>
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> Tested-by: Shijith Thotton <sthotton@marvell.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
> 
> This has no effect on 32-bit ARM?
> 

It shouldn't. request_nmi()/request_percpu_nmi() should fail on a
platform that doesn't have NMI support (through the IRQ framework).
Currently, only arm64 with GICv3 provides that support.

So the pmu driver should fall back to request_irq()/request_percpu_irq()
on 32-bit ARM kernel platforms and work as before. I can clarify that in
the commit message if there is a respin (or if the maintainers agree to
amend).
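
Concretely, the last patch of the series does something along these
lines (a simplified sketch with illustrative variable names, not the
exact driver code):

	/* Try to get the line as an NMI, fall back to a normal IRQ. */
	err = request_percpu_nmi(irq, handler, "arm-pmu", &cpu_armpmu);
	if (err)
		err = request_percpu_irq(irq, handler, "arm-pmu",
					 &cpu_armpmu);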

Thanks,

-- 
Julien Thierry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 9/9] arm_pmu: Use NMIs for PMU
  2019-07-30  9:18     ` Julien Thierry
@ 2019-07-30  9:28       ` Russell King - ARM Linux admin
  2019-07-30 14:06         ` Julien Thierry
  0 siblings, 1 reply; 24+ messages in thread
From: Russell King - ARM Linux admin @ 2019-07-30  9:28 UTC (permalink / raw)
  To: Julien Thierry
  Cc: mark.rutland, peterz, liwei391, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, jolsa,
	linux-arm-kernel

On Tue, Jul 30, 2019 at 10:18:16AM +0100, Julien Thierry wrote:
> Hi Russell,
> 
> On 30/07/2019 10:11, Russell King - ARM Linux admin wrote:
> > On Wed, Jul 17, 2019 at 09:17:12AM +0100, Julien Thierry wrote:
> >> Add required PMU interrupt operations for NMIs. Request interrupt lines as
> >> NMIs when possible, otherwise fall back to normal interrupts.
> >>
> >> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> >> Tested-by: Shijith Thotton <sthotton@marvell.com>
> >> Cc: Will Deacon <will.deacon@arm.com>
> >> Cc: Mark Rutland <mark.rutland@arm.com>
> > 
> > This has no effect on 32-bit ARM?
> > 
> 
> It shouldn't. request_nmi()/request_percpu_nmi() should fail on a
> platform that doesn't have NMI support (through the IRQ framework).
> Currently, only arm64 with GICv3 provides that support.
> 
> So the pmu driver should fall back to request_irq()/request_percpu_irq()
> on 32-bit ARM kernel platforms and work as before. I can clarify that in
> the commit message if there is a respin (or if the maintainers agree to
> amend).

Has it been tested with a 32-bit guest kernel running on ARM64?

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 9/9] arm_pmu: Use NMIs for PMU
  2019-07-30  9:28       ` Russell King - ARM Linux admin
@ 2019-07-30 14:06         ` Julien Thierry
  0 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-07-30 14:06 UTC (permalink / raw)
  To: Russell King - ARM Linux admin
  Cc: mark.rutland, peterz, liwei391, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, jolsa,
	linux-arm-kernel



On 30/07/2019 10:28, Russell King - ARM Linux admin wrote:
> On Tue, Jul 30, 2019 at 10:18:16AM +0100, Julien Thierry wrote:
>> Hi Russell,
>>
>> On 30/07/2019 10:11, Russell King - ARM Linux admin wrote:
>>> On Wed, Jul 17, 2019 at 09:17:12AM +0100, Julien Thierry wrote:
>>>> Add required PMU interrupt operations for NMIs. Request interrupt lines as
>>>> NMIs when possible, otherwise fall back to normal interrupts.
>>>>
>>>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>>>> Tested-by: Shijith Thotton <sthotton@marvell.com>
>>>> Cc: Will Deacon <will.deacon@arm.com>
>>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>>
>>> This has no effect on 32-bit ARM?
>>>
>>
>> It shouldn't. request_nmi()/request_percpu_nmi() should fail on a
>> platform that doesn't have NMI support (through the IRQ framework).
>> Currently, only arm64 with GICv3 provides that support.
>>
>> So the pmu driver should fall back to request_irq()/request_percpu_irq()
>> on 32-bit ARM kernel platforms and work as before. I can clarify that in
>> the commit message if there is a respin (or if the maintainers agree to
>> amend).
> 
> Has it been tested with a 32-bit guest kernel running on ARM64?

In theory, this shouldn't change anything. Even if the host takes the
PMU interrupt as an NMI, interrupts are presented to the guest the same
way as before, not in a host NMI context. And as long as the guest
handles its interrupts the same way (i.e. doesn't use priorities and
sticks to the PSTATE/CPSR I bit to block interrupts), things should
behave as before.

I still gave this a try, just in case there would be other surprises.
Running (and profiling) a 32-bit kvm guest from a host that is using
Pseudo-NMIs for the PMU interrupt works, and using the virtual PMU from
within the guest also works.

Thanks,

-- 
Julien Thierry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 2/9] arm64: perf: Remove PMU locking
  2019-07-17  8:17 ` [PATCH v4 2/9] arm64: perf: Remove PMU locking Julien Thierry
@ 2019-08-01 12:58   ` Will Deacon
  2019-08-02 14:26     ` Julien Thierry
  0 siblings, 1 reply; 24+ messages in thread
From: Will Deacon @ 2019-08-01 12:58 UTC (permalink / raw)
  To: Julien Thierry
  Cc: mark.rutland, peterz, Catalin Marinas, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, jolsa,
	linux-arm-kernel, liwei391

On Wed, Jul 17, 2019 at 09:17:05AM +0100, Julien Thierry wrote:
> Since the PMU driver uses direct registers for counter
> setup/manipulation, locking around these operations is no longer needed.
> 
> For operations that can be called with interrupts enabled, preemption
> still needs to be disabled to ensure the programming of the PMU is
> done on the expected CPU and not migrated mid-programming.
> 
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> ---
>  arch/arm64/kernel/perf_event.c | 30 ++----------------------------
>  1 file changed, 2 insertions(+), 28 deletions(-)
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 838758f..0e2cf5d 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -673,15 +673,10 @@ static inline u32 armv8pmu_getreset_flags(void)
> 
>  static void armv8pmu_enable_event(struct perf_event *event)
>  {
> -	unsigned long flags;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> -
>  	/*
>  	 * Enable counter and interrupt, and set the counter to count
>  	 * the event that we're interested in.
>  	 */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
> 
>  	/*
>  	 * Disable counter
> @@ -702,21 +697,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
>  	 * Enable counter
>  	 */
>  	armv8pmu_enable_event_counter(event);
> -
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }

With the implicit ISBs now removed by virtue of addressing the counter
register directly, what prevents the programming of the evtype being
reordered with respect to disabling/enabling the counter?

>  static void armv8pmu_disable_event(struct perf_event *event)
>  {
> -	unsigned long flags;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> -
> -	/*
> -	 * Disable counter and interrupt
> -	 */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
> -
>  	/*
>  	 * Disable counter
>  	 */
> @@ -726,30 +710,20 @@ static void armv8pmu_disable_event(struct perf_event *event)
>  	 * Disable interrupt for this counter
>  	 */
>  	armv8pmu_disable_event_irq(event);
> -
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
> 
>  static void armv8pmu_start(struct arm_pmu *cpu_pmu)
>  {
> -	unsigned long flags;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> -
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
> +	WARN_ON_ONCE(preemptible());
>  	/* Enable all counters */
>  	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
> 
>  static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
>  {
> -	unsigned long flags;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> -
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
> +	WARN_ON_ONCE(preemptible());

I don't think we need these WARN_ONs -- these are very much per-CPU
operations from the context of the perf core, so we'll be ok.

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 3/9] arm: perf: save/resore pmsel
  2019-07-17  8:17 ` [PATCH v4 3/9] arm: perf: save/resore pmsel Julien Thierry
@ 2019-08-01 13:01   ` Will Deacon
  2019-08-02 14:34     ` Julien Thierry
  0 siblings, 1 reply; 24+ messages in thread
From: Will Deacon @ 2019-08-01 13:01 UTC (permalink / raw)
  To: Julien Thierry
  Cc: mark.rutland, peterz, liwei391, will.deacon, Russell King, acme,
	alexander.shishkin, mingo, stable, namhyung, sthotton, jolsa,
	linux-arm-kernel

[typo in subject: resore ->restore]

On Wed, Jul 17, 2019 at 09:17:06AM +0100, Julien Thierry wrote:
> The callback pmu->read() can be called with interrupts enabled.
> Currently, on ARM, this can cause the following callchain:
> 
> armpmu_read() -> armpmu_event_update() -> armv7pmu_read_counter()

Why can't we just disable irqs in armv7pmu_read_counter() ?

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 6/9] arm64: perf: Do not call irq_work_run in NMI context
  2019-07-17  8:17 ` [PATCH v4 6/9] arm64: perf: Do not call irq_work_run in NMI context Julien Thierry
@ 2019-08-01 13:06   ` Will Deacon
  2019-08-02 14:43     ` Julien Thierry
  0 siblings, 1 reply; 24+ messages in thread
From: Will Deacon @ 2019-08-01 13:06 UTC (permalink / raw)
  To: Julien Thierry
  Cc: mark.rutland, peterz, Catalin Marinas, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, jolsa,
	linux-arm-kernel, liwei391

On Wed, Jul 17, 2019 at 09:17:09AM +0100, Julien Thierry wrote:
> Function irq_work_run is not NMI safe and should not be called from NMI
> context.
> 
> When PMU interrupt is an NMI do not call irq_work_run. Instead rely on the
> IRQ work IPI to run the irq_work queue once NMI/IRQ contexts have been
> exited.
> 
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> ---
>  arch/arm64/kernel/perf_event.c | 14 +++++---------
>  1 file changed, 5 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 0e2cf5d..9c959ad 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -776,20 +776,16 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
>  		if (!armpmu_event_set_period(event))
>  			continue;
>  
> +		/*
> +		 * Perf event overflow will queue the processing of the event as
> +		 * an irq_work which will be taken care of in the handling of
> +		 * IPI_IRQ_WORK.
> +		 */
>  		if (perf_event_overflow(event, &data, regs))
>  			cpu_pmu->disable(event);
>  	}
>  	armv8pmu_start(cpu_pmu);
>  
> -	/*
> -	 * Handle the pending perf events.
> -	 *
> -	 * Note: this call *must* be run with interrupts disabled. For
> -	 * platforms that can have the PMU interrupts raised as an NMI, this
> -	 * will not work.
> -	 */
> -	irq_work_run();

What about the case where NMIs are not being used (e.g. GICv2)?

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 4/9] arm: perf: Remove Remove PMU locking
  2019-07-17  8:17 ` [PATCH v4 4/9] arm: perf: Remove Remove PMU locking Julien Thierry
@ 2019-08-01 13:06   ` Will Deacon
  2019-08-02 14:36     ` Julien Thierry
  0 siblings, 1 reply; 24+ messages in thread
From: Will Deacon @ 2019-08-01 13:06 UTC (permalink / raw)
  To: Julien Thierry
  Cc: mark.rutland, peterz, liwei391, will.deacon, Russell King, acme,
	alexander.shishkin, mingo, namhyung, sthotton, jolsa,
	linux-arm-kernel

[extra 'Remove' in subject]

On Wed, Jul 17, 2019 at 09:17:07AM +0100, Julien Thierry wrote:
> Since the PMU interrupt saves and restores the value of the selector
> register, there is no need to serialize register accesses against the
> interrupt contexts.
> 
> For operations that can be called with interrupts enabled, preemption
> still needs to be disabled to ensure the programming of the PMU is
> done on the expected CPU and not migrated mid-programming.
> 
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Russell King <linux@armlinux.org.uk>
> ---
>  arch/arm/kernel/perf_event_v7.c | 54 ++---------------------------------------
>  1 file changed, 2 insertions(+), 52 deletions(-)

I'm struggling to see why this patch is needed or, if it is, why we're not
doing something similar for v6 and xscale.

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 2/9] arm64: perf: Remove PMU locking
  2019-08-01 12:58   ` Will Deacon
@ 2019-08-02 14:26     ` Julien Thierry
  0 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-08-02 14:26 UTC (permalink / raw)
  To: Will Deacon
  Cc: mark.rutland, peterz, Catalin Marinas, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, jolsa,
	linux-arm-kernel, liwei391

Hi Will,

On 01/08/2019 13:58, Will Deacon wrote:
> On Wed, Jul 17, 2019 at 09:17:05AM +0100, Julien Thierry wrote:
>> Since the PMU driver uses direct registers for counter
>> setup/manipulation, locking around these operations is no longer needed.
>>
>> For operations that can be called with interrupts enabled, preemption
>> still needs to be disabled to ensure the programming of the PMU is
>> done on the expected CPU and not migrated mid-programming.
>>
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
>> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
>> Cc: Jiri Olsa <jolsa@redhat.com>
>> Cc: Namhyung Kim <namhyung@kernel.org>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> ---
>>  arch/arm64/kernel/perf_event.c | 30 ++----------------------------
>>  1 file changed, 2 insertions(+), 28 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
>> index 838758f..0e2cf5d 100644
>> --- a/arch/arm64/kernel/perf_event.c
>> +++ b/arch/arm64/kernel/perf_event.c
>> @@ -673,15 +673,10 @@ static inline u32 armv8pmu_getreset_flags(void)
>>
>>  static void armv8pmu_enable_event(struct perf_event *event)
>>  {
>> -	unsigned long flags;
>> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
>> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>> -
>>  	/*
>>  	 * Enable counter and interrupt, and set the counter to count
>>  	 * the event that we're interested in.
>>  	 */
>> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>>
>>  	/*
>>  	 * Disable counter
>> @@ -702,21 +697,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
>>  	 * Enable counter
>>  	 */
>>  	armv8pmu_enable_event_counter(event);
>> -
>> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>>  }
> 
> With the implicit ISBs now removed by virtue of addressing the counter
> register directly, what prevents the programming of the evtype being
> reordered with respect to disabling/enabling the counter?

I agree, it seems an ISB is missing here. It should probably be fixed in
the previous patch.

However, I notice that even before that patch, there is no ISB between
enabling the IRQ for the counter and enabling the counter itself,
meaning we might start counting events before the IRQ is enabled.

Should we have something like the following?

        armv8pmu_disable_event_counter(event);
        isb();
        armv8pmu_write_event_type(event);
        armv8pmu_enable_event_irq(event);
        isb();
        armv8pmu_enable_event_counter(event);

> 
>>  static void armv8pmu_disable_event(struct perf_event *event)
>>  {
>> -	unsigned long flags;
>> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
>> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>> -
>> -	/*
>> -	 * Disable counter and interrupt
>> -	 */
>> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>> -
>>  	/*
>>  	 * Disable counter
>>  	 */
>> @@ -726,30 +710,20 @@ static void armv8pmu_disable_event(struct perf_event *event)
>>  	 * Disable interrupt for this counter
>>  	 */
>>  	armv8pmu_disable_event_irq(event);
>> -
>> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>>  }
>>
>>  static void armv8pmu_start(struct arm_pmu *cpu_pmu)
>>  {
>> -	unsigned long flags;
>> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>> -
>> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>> +	WARN_ON_ONCE(preemptible());
>>  	/* Enable all counters */
>>  	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
>> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>>  }
>>
>>  static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
>>  {
>> -	unsigned long flags;
>> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>> -
>> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>> +	WARN_ON_ONCE(preemptible());
> 
> I don't think we need these WARN_ONs -- these are very much per-CPU
> operations from the context of the perf core, so we'll be ok.
> 

If they are not necessary we can get rid of them.

Thanks,

-- 
Julien Thierry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 3/9] arm: perf: save/resore pmsel
  2019-08-01 13:01   ` Will Deacon
@ 2019-08-02 14:34     ` Julien Thierry
  0 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-08-02 14:34 UTC (permalink / raw)
  To: Will Deacon
  Cc: mark.rutland, peterz, liwei391, will.deacon, Russell King, acme,
	alexander.shishkin, mingo, stable, namhyung, sthotton, jolsa,
	linux-arm-kernel



On 01/08/2019 14:01, Will Deacon wrote:
> [typo in subject: resore ->restore]
> 
> On Wed, Jul 17, 2019 at 09:17:06AM +0100, Julien Thierry wrote:
>> The callback pmu->read() can be called with interrupts enabled.
>> Currently, on ARM, this can cause the following callchain:
>>
>> armpmu_read() -> armpmu_event_update() -> armv7pmu_read_counter()
> 
> Why can't we just disable irqs in armv7pmu_read_counter() ?
> 

We could. But since we get rid of the lock later in the series
(otherwise this would be the only reason left to keep it), we might as
well find another solution.
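
For reference, what this patch does is have the ARMv7 interrupt handler
preserve PMSELR around the overflow handling, roughly (a sketch of the
idea, not the exact diff):

	u32 pmsel;

	/* Save the counter selector on entry to the handler. */
	asm volatile("mrc p15, 0, %0, c9, c12, 5" : "=r" (pmsel));

	/* ... handle the counter overflows ... */

	/* Restore the selector before returning. */
	asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (pmsel));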

Thanks,

-- 
Julien Thierry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 4/9] arm: perf: Remove Remove PMU locking
  2019-08-01 13:06   ` Will Deacon
@ 2019-08-02 14:36     ` Julien Thierry
  0 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-08-02 14:36 UTC (permalink / raw)
  To: Will Deacon
  Cc: mark.rutland, peterz, liwei391, will.deacon, Russell King, acme,
	alexander.shishkin, mingo, namhyung, sthotton, jolsa,
	linux-arm-kernel



On 01/08/2019 14:06, Will Deacon wrote:
> [extra 'Remove' in subject]
> 
> On Wed, Jul 17, 2019 at 09:17:07AM +0100, Julien Thierry wrote:
>> Since the PMU interrupt saves and restores the value of the selector
>> register, there is no need to serialize register accesses against the
>> interrupt contexts.
>>
>> For operations that can be called with interrupts enabled, preemption
>> still needs to be disabled to ensure the programming of the PMU is
>> done on the expected CPU and not migrated mid-programming.
>>
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
>> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
>> Cc: Jiri Olsa <jolsa@redhat.com>
>> Cc: Namhyung Kim <namhyung@kernel.org>
>> Cc: Russell King <linux@armlinux.org.uk>
>> ---
>>  arch/arm/kernel/perf_event_v7.c | 54 ++---------------------------------------
>>  1 file changed, 2 insertions(+), 52 deletions(-)
> 
> I'm struggling to see why this patch is needed or, if it is, why we're not
> doing something similar for v6 and xscale.

We can't do the same thing for v6 because it doesn't have set/clear
registers for event enabling/disabling. To do that on v6 we must do a
read-modify-write, so we need to keep a spinlock.

For v7 we can do without it.
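
To illustrate, enabling an event on v6 boils down to a read-modify-write
of the single control register, along the lines of perf_event_v6.c (a
sketch, not a verbatim quote):

	unsigned long flags, val;

	raw_spin_lock_irqsave(&events->pmu_lock, flags);
	val = armv6_pmcr_read();
	val &= ~mask;	/* clear the event/enable field for this counter */
	val |= evt;	/* program the new event and enable bits */
	armv6_pmcr_write(val);
	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);

On v7, counters are controlled through dedicated set/clear registers
(PMCNTENSET/PMCNTENCLR, and PMINTENSET/PMINTENCLR for the interrupt),
so each write is self-contained and needs no lock.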

Thanks,

-- 
Julien Thierry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 6/9] arm64: perf: Do not call irq_work_run in NMI context
  2019-08-01 13:06   ` Will Deacon
@ 2019-08-02 14:43     ` Julien Thierry
  0 siblings, 0 replies; 24+ messages in thread
From: Julien Thierry @ 2019-08-02 14:43 UTC (permalink / raw)
  To: Will Deacon
  Cc: mark.rutland, peterz, Catalin Marinas, will.deacon, acme,
	alexander.shishkin, mingo, namhyung, sthotton, jolsa,
	linux-arm-kernel, liwei391



On 01/08/2019 14:06, Will Deacon wrote:
> On Wed, Jul 17, 2019 at 09:17:09AM +0100, Julien Thierry wrote:
>> Function irq_work_run is not NMI safe and should not be called from NMI
>> context.
>>
>> When PMU interrupt is an NMI do not call irq_work_run. Instead rely on the
>> IRQ work IPI to run the irq_work queue once NMI/IRQ contexts have been
>> exited.
>>
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
>> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
>> Cc: Jiri Olsa <jolsa@redhat.com>
>> Cc: Namhyung Kim <namhyung@kernel.org>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> ---
>>  arch/arm64/kernel/perf_event.c | 14 +++++---------
>>  1 file changed, 5 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
>> index 0e2cf5d..9c959ad 100644
>> --- a/arch/arm64/kernel/perf_event.c
>> +++ b/arch/arm64/kernel/perf_event.c
>> @@ -776,20 +776,16 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
>>  		if (!armpmu_event_set_period(event))
>>  			continue;
>>  
>> +		/*
>> +		 * Perf event overflow will queue the processing of the event as
>> +		 * an irq_work which will be taken care of in the handling of
>> +		 * IPI_IRQ_WORK.
>> +		 */
>>  		if (perf_event_overflow(event, &data, regs))
>>  			cpu_pmu->disable(event);
>>  	}
>>  	armv8pmu_start(cpu_pmu);
>>  
>> -	/*
>> -	 * Handle the pending perf events.
>> -	 *
>> -	 * Note: this call *must* be run with interrupts disabled. For
>> -	 * platforms that can have the PMU interrupts raised as an NMI, this
>> -	 * will not work.
>> -	 */
>> -	irq_work_run();
> 
> What about the case where NMIs are not being used (e.g. GICv2)?
> 

As the comment above mentions, the overflow handler will trigger
IPI_IRQ_WORK, which will call irq_work_run() for both the NMI and the
normal IRQ case.

Unless we really need to process the irq_work here, it makes things
simpler to get rid of the call.
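
To spell out the flow when an overflow has pending work (function names
from the arm64 side):

	perf_event_overflow()
	  -> irq_work_queue(&event->pending)
	    -> arch_irq_work_raise()	/* raises IPI_IRQ_WORK */
	[...]
	handle_IPI(IPI_IRQ_WORK)
	  -> irq_work_run()	/* runs once we are back in IRQ context */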

It was suggested during the previous version:
https://www.spinics.net/lists/arm-kernel/msg740027.html

Cheers,

-- 
Julien Thierry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2019-08-02 14:43 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-17  8:17 [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
2019-07-17  8:17 ` [PATCH v4 1/9] arm64: perf: avoid PMXEV* indirection Julien Thierry
2019-07-17  8:17 ` [PATCH v4 2/9] arm64: perf: Remove PMU locking Julien Thierry
2019-08-01 12:58   ` Will Deacon
2019-08-02 14:26     ` Julien Thierry
2019-07-17  8:17 ` [PATCH v4 3/9] arm: perf: save/resore pmsel Julien Thierry
2019-08-01 13:01   ` Will Deacon
2019-08-02 14:34     ` Julien Thierry
2019-07-17  8:17 ` [PATCH v4 4/9] arm: perf: Remove Remove PMU locking Julien Thierry
2019-08-01 13:06   ` Will Deacon
2019-08-02 14:36     ` Julien Thierry
2019-07-17  8:17 ` [PATCH v4 5/9] perf/arm_pmu: Move PMU lock to ARMv6 events Julien Thierry
2019-07-17  8:17 ` [PATCH v4 6/9] arm64: perf: Do not call irq_work_run in NMI context Julien Thierry
2019-08-01 13:06   ` Will Deacon
2019-08-02 14:43     ` Julien Thierry
2019-07-17  8:17 ` [PATCH v4 7/9] arm/arm64: kvm: pmu: Make overflow handler NMI safe Julien Thierry
2019-07-17  8:17 ` [PATCH v4 8/9] arm_pmu: Introduce pmu_irq_ops Julien Thierry
2019-07-17  8:17 ` [PATCH v4 9/9] arm_pmu: Use NMIs for PMU Julien Thierry
2019-07-30  9:11   ` Russell King - ARM Linux admin
2019-07-30  9:18     ` Julien Thierry
2019-07-30  9:28       ` Russell King - ARM Linux admin
2019-07-30 14:06         ` Julien Thierry
2019-07-17  9:02 ` [PATCH v4 0/9] arm_pmu: Use NMI for perf interrupt Julien Thierry
2019-07-30  9:05 ` Julien Thierry

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).