linux-arm-kernel.lists.infradead.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt
@ 2020-08-19 13:34 Alexandru Elisei
  2020-08-19 13:34 ` [PATCH v6 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_event() Alexandru Elisei
                   ` (8 more replies)
  0 siblings, 9 replies; 19+ messages in thread
From: Alexandru Elisei @ 2020-08-19 13:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, maz, swboyd, catalin.marinas, will

The series makes the arm_pmu driver use NMIs for the perf interrupt when
NMIs are available on the platform (currently, only arm64 + GICv3). To make
it easier to play with the patches, I've pushed a branch at [1]:

$ git clone -b pmu-nmi-v6 git://linux-arm.org/linux-ae

I've tested the series on an espressobin v7*. These are the results of
running perf record -a -- sleep 60:

1. Without the patches:

    16.73%  [k] arch_local_irq_enable
    12.20%  [k] arch_cpu_idle
     8.61%  [k] _raw_spin_unlock_irqrestore
     4.09%  [k] arch_local_irq_enable
     2.25%  [k] arch_local_irq_enable
     1.82%  [k] arch_counter_get_cntpct
     [..]

2. Using NMIs:

     3.37%  [k] arch_counter_get_cntpct
     2.62%  [.] _IO_fwrite
     1.62%  [.] __gconv_transform_ascii_internal
     1.49%  [.] __mbrtowc
     1.44%  [k] el0_svc_common
     1.31%  [.] strchr
     [..]

When running perf record -a -- iperf3 -c 127.0.0.1 -t 60:

1. Without the patches:
    24.25%  [k] __arch_copy_from_user
    20.94%  [k] __arch_copy_to_user
     5.71%  [k] arch_local_irq_enable
     3.12%  [k] _raw_spin_unlock_irqrestore
     2.01%  [k] __free_pages_ok
     1.48%  [k] arch_cpu_idle
     [..]

2. Using NMIs:

    23.15%  [k] __arch_copy_from_user
    21.68%  [k] __arch_copy_to_user
     1.23%  [k] tcp_ack
     1.08%  [k] tcp_sendmsg_locked
     0.97%  [k] rmqueue
     0.91%  [k] __free_pages_ok
     [..]

I've ran the same tests in a VM when both host+guest use NMIs, and when
neither use them. All of these tests were also ran on the model.  Similar
results in all cases.

* All the firmware versions for espressobin v7 that I've tried clear
SCR_EL3.FIQ, which means that NMIs don't work. To make them work on the
board, I modified the GICv3 driver. That's why I would really appreciate
someone testing this series on a board where NMIs work without any GIC
changes. For people who want to test the series, but don't have a board
with firmware that sets SCR_EL3.FIQ, I've pushed a branch [2] with the
GICv3 drivers changes necessary to make NMIs work:

$ git clone -b pmu-nmi-v6-nmi-fiq-clear-v2 git://linux-arm.org/linux-ae

Summary of the patches:
* Patch 1 is a fix for a bug that Julien found during the review for v4.
* Patches 2 and 3 remove locking from arm64 perf event code.
* Patches 4 and 5 makes the arm64 PMU interrupt handler NMI safe.
* Patches 6 and 7 enable the use of NMIs on arm64 with a GICv3 irqchip.

Changes since v5 [3]:
- Rebased on top of v5.9-rc1.
- Typo fixes.
- Added comments to the ISB added by patches #1 and #2.
- Reworded message for patch #4, as per Mark's excellent suggestion.

Changes since v4 [4]:
- Rebased on top of v5.8-rc1 and dropped the Tested-by tags because it's
  been almost a year since the series has been tested.
- Dropped patch 3 because I couldn't find any instance where
  armv7pmu_read_counter() was called with interrupts enabled. I've also
  tested this by running several instances of perf for a few hours, and the
  function was called every time with interrupts disabled.
- Dropped patches 4 and 5 because the tradeoff wasn't worth it in my
  opinion: the irq handler was slower all the time (because it
  saved/restored the counter select register), in exchange for being
  slightly faster on the rare ocassions when it triggered at the beginning
  of the critical sections.
- Minor changes here and there to address review comments.

Changes since v3 [5]:
- Added tags
- Fix build issue for perf_event_v6
- Don't disable preemption in pmu->enable()
- Always rely on IPI_IRQ_WORK to run the queued work
- Fixed typos + cleanups

Changes since v2 [6]:
- Rebased on recent linux-next (next-20190708)
- Fixed a number of bugs with indices (reported by Wei)
- Minor style fixes

Changes since v1 [7]:
- Rebased on v5.1-rc1
- Pseudo-NMI has changed a lot since then, use the (now merged) NMI API
- Remove locking from armv7 perf_event
- Use locking only in armv6 perf_event
- Use direct counter/type registers insted of selector register for armv8

[1] http://www.linux-arm.org/git?p=linux-ae.git;a=shortlog;h=refs/heads/pmu-nmi-v6
[2] http://www.linux-arm.org/git?p=linux-ae.git;a=shortlog;h=refs/heads/pmu-nmi-v6-nmi-fiq-clear-v2
[3] https://www.spinics.net/lists/kernel/msg3554236.html
[4] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/666824.html
[5] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/665339.html
[6] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-March/640536.html
[7] https://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/554611.html

Alexandru Elisei (1):
  arm64: perf: Add missing ISB in armv8pmu_enable_event()

Julien Thierry (5):
  arm64: perf: Remove PMU locking
  arm64: perf: Defer irq_work to IPI_IRQ_WORK
  KVM: arm64: pmu: Make overflow handler NMI safe
  arm_pmu: Introduce pmu_irq_ops
  arm_pmu: arm64: Use NMIs for PMU

Mark Rutland (1):
  arm64: perf: Avoid PMXEV* indirection

 arch/arm64/kernel/perf_event.c | 145 +++++++++++++++++++++------------
 arch/arm64/kvm/pmu-emul.c      |  25 +++++-
 drivers/perf/arm_pmu.c         | 142 +++++++++++++++++++++++++++-----
 include/kvm/arm_pmu.h          |   1 +
 4 files changed, 241 insertions(+), 72 deletions(-)

-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v6 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_event()
  2020-08-19 13:34 [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
@ 2020-08-19 13:34 ` Alexandru Elisei
  2020-08-19 13:34 ` [PATCH v6 2/7] arm64: perf: Avoid PMXEV* indirection Alexandru Elisei
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Alexandru Elisei @ 2020-08-19 13:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, Julien Thierry, Peter Zijlstra, maz,
	Jiri Olsa, Will Deacon, Arnaldo Carvalho de Melo, swboyd,
	Alexander Shishkin, Ingo Molnar, catalin.marinas, Namhyung Kim,
	will, Julien Thierry

Writes to the PMXEVTYPER_EL0 register are not self-synchronising. In
armv8pmu_enable_event(), the PE can reorder configuring the event type
after we have enabled the counter and the interrupt. This can lead to an
interrupt being asserted because of the previous event type that we were
counting using the same counter, not the one that we've just configured.

The same rationale applies to writes to the PMINTENSET_EL1 register. The PE
can reorder enabling the interrupt at any point in the future after we have
enabled the event.

Prevent both situations from happening by adding an ISB just before we
enable the event counter.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 030896885ade ("arm64: Performance counters support")
Reported-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 462f9a9cc44b..878e7087be02 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -632,8 +632,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	armv8pmu_enable_event_irq(event);
 
 	/*
-	 * Enable counter
+	 * Enable counter. Make sure event configuration register writes are
+	 * visible before we enable the counter.
 	 */
+	isb();
 	armv8pmu_enable_event_counter(event);
 
 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v6 2/7] arm64: perf: Avoid PMXEV* indirection
  2020-08-19 13:34 [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
  2020-08-19 13:34 ` [PATCH v6 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_event() Alexandru Elisei
@ 2020-08-19 13:34 ` Alexandru Elisei
  2020-09-21 13:43   ` Will Deacon
  2020-08-19 13:34 ` [PATCH v6 3/7] arm64: perf: Remove PMU locking Alexandru Elisei
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Alexandru Elisei @ 2020-08-19 13:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, Julien Thierry, Peter Zijlstra, maz,
	Jiri Olsa, Will Deacon, Arnaldo Carvalho de Melo, swboyd,
	Alexander Shishkin, Ingo Molnar, catalin.marinas, Namhyung Kim,
	will, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

Currently we access the counter registers and their respective type
registers indirectly. This requires us to write to PMSELR, issue an ISB,
then access the relevant PMXEV* registers.

This is unfortunate, because:

* Under virtualization, accessing one register requires two traps to
  the hypervisor, even though we could access the register directly with
  a single trap.

* We have to issue an ISB which we could otherwise avoid the cost of.

* When we use NMIs, the NMI handler will have to save/restore the select
  register in case the code it preempted was attempting to access a
  counter or its type register.

We can avoid these issues by directly accessing the relevant registers.
This patch adds helpers to do so.

In armv8pmu_enable_event() we still need the ISB to prevent the PE from
reordering the write to PMINTENSET_EL1 register. If the interrupt is
enabled before we disable the counter and the new event is configured,
we might get an interrupt triggered by the previously programmed event
overflowing, but which we wrongly attribute to the event that we are
enabling.

In the process, remove the comment that refers to the ARMv7 PMU.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Julien T.: Don't inline read/write functions to avoid big code-size
	increase, remove unused read_pmevtypern function,
	fix counter index issue.]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[Removed comment, removed trailing semicolons in macros, added ISB]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 99 +++++++++++++++++++++++++++++-----
 1 file changed, 85 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 878e7087be02..ac818abd65b6 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -348,6 +348,73 @@ static inline bool armv8pmu_event_is_chained(struct perf_event *event)
 #define	ARMV8_IDX_TO_COUNTER(x)	\
 	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)
 
+/*
+ * This code is really good
+ */
+
+#define PMEVN_CASE(n, case_macro) \
+	case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro)				\
+	do {							\
+		switch (x) {					\
+		PMEVN_CASE(0,  case_macro);			\
+		PMEVN_CASE(1,  case_macro);			\
+		PMEVN_CASE(2,  case_macro);			\
+		PMEVN_CASE(3,  case_macro);			\
+		PMEVN_CASE(4,  case_macro);			\
+		PMEVN_CASE(5,  case_macro);			\
+		PMEVN_CASE(6,  case_macro);			\
+		PMEVN_CASE(7,  case_macro);			\
+		PMEVN_CASE(8,  case_macro);			\
+		PMEVN_CASE(9,  case_macro);			\
+		PMEVN_CASE(10, case_macro);			\
+		PMEVN_CASE(11, case_macro);			\
+		PMEVN_CASE(12, case_macro);			\
+		PMEVN_CASE(13, case_macro);			\
+		PMEVN_CASE(14, case_macro);			\
+		PMEVN_CASE(15, case_macro);			\
+		PMEVN_CASE(16, case_macro);			\
+		PMEVN_CASE(17, case_macro);			\
+		PMEVN_CASE(18, case_macro);			\
+		PMEVN_CASE(19, case_macro);			\
+		PMEVN_CASE(20, case_macro);			\
+		PMEVN_CASE(21, case_macro);			\
+		PMEVN_CASE(22, case_macro);			\
+		PMEVN_CASE(23, case_macro);			\
+		PMEVN_CASE(24, case_macro);			\
+		PMEVN_CASE(25, case_macro);			\
+		PMEVN_CASE(26, case_macro);			\
+		PMEVN_CASE(27, case_macro);			\
+		PMEVN_CASE(28, case_macro);			\
+		PMEVN_CASE(29, case_macro);			\
+		PMEVN_CASE(30, case_macro);			\
+		default: WARN(1, "Invalid PMEV* index\n");	\
+		}						\
+	} while (0)
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+}
+
 static inline u32 armv8pmu_pmcr_read(void)
 {
 	return read_sysreg(pmcr_el0);
@@ -376,17 +443,11 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
 	return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
 }
 
-static inline void armv8pmu_select_counter(int idx)
+static inline u32 armv8pmu_read_evcntr(int idx)
 {
 	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
-	write_sysreg(counter, pmselr_el0);
-	isb();
-}
 
-static inline u64 armv8pmu_read_evcntr(int idx)
-{
-	armv8pmu_select_counter(idx);
-	return read_sysreg(pmxevcntr_el0);
+	return read_pmevcntrn(counter);
 }
 
 static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
@@ -458,8 +519,9 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
 
 static inline void armv8pmu_write_evcntr(int idx, u64 value)
 {
-	armv8pmu_select_counter(idx);
-	write_sysreg(value, pmxevcntr_el0);
+	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
+
+	write_pmevcntrn(counter, value);
 }
 
 static inline void armv8pmu_write_hw_counter(struct perf_event *event,
@@ -494,9 +556,10 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)
 
 static inline void armv8pmu_write_evtype(int idx, u32 val)
 {
-	armv8pmu_select_counter(idx);
+	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
+
 	val &= ARMV8_PMU_EVTYPE_MASK;
-	write_sysreg(val, pmxevtyper_el0);
+	write_pmevtypern(counter, val);
 }
 
 static inline void armv8pmu_write_event_type(struct perf_event *event)
@@ -516,7 +579,10 @@ static inline void armv8pmu_write_event_type(struct perf_event *event)
 		armv8pmu_write_evtype(idx - 1, hwc->config_base);
 		armv8pmu_write_evtype(idx, chain_evt);
 	} else {
-		armv8pmu_write_evtype(idx, hwc->config_base);
+		if (idx == ARMV8_IDX_CYCLE_COUNTER)
+			write_sysreg(hwc->config_base, pmccfiltr_el0);
+		else
+			armv8pmu_write_evtype(idx, hwc->config_base);
 	}
 }
 
@@ -620,9 +686,14 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	 * Disable counter
 	 */
 	armv8pmu_disable_event_counter(event);
+	/*
+	 * Make sure the effects of disabling the counter are visible before we
+	 * start configuring the event.
+	 */
+	isb();
 
 	/*
-	 * Set event (if destined for PMNx counters).
+	 * Set event.
 	 */
 	armv8pmu_write_event_type(event);
 
-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v6 3/7] arm64: perf: Remove PMU locking
  2020-08-19 13:34 [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
  2020-08-19 13:34 ` [PATCH v6 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_event() Alexandru Elisei
  2020-08-19 13:34 ` [PATCH v6 2/7] arm64: perf: Avoid PMXEV* indirection Alexandru Elisei
@ 2020-08-19 13:34 ` Alexandru Elisei
  2020-08-19 13:34 ` [PATCH v6 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK Alexandru Elisei
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Alexandru Elisei @ 2020-08-19 13:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, Julien Thierry, Peter Zijlstra, maz,
	Jiri Olsa, Will Deacon, Arnaldo Carvalho de Melo, swboyd,
	Alexander Shishkin, Ingo Molnar, catalin.marinas, Namhyung Kim,
	will

From: Julien Thierry <julien.thierry@arm.com>

The PMU is disabled and enabled, and the counters are programmed from
contexts where interrupts or preemption is disabled.

The functions to toggle the PMU and to program the PMU counters access the
registers directly and don't access data modified by the interrupt handler.
That, and the fact that they're always called from non-preemptible
contexts, means that we don't need to disable interrupts or use a spinlock.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[Explained why locking is not needed, removed WARN_ONs]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 28 ----------------------------
 1 file changed, 28 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index ac818abd65b6..80744c2f1454 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -672,15 +672,10 @@ static inline u32 armv8pmu_getreset_flags(void)
 
 static void armv8pmu_enable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
 	/*
 	 * Enable counter and interrupt, and set the counter to count
 	 * the event that we're interested in.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 
 	/*
 	 * Disable counter
@@ -708,21 +703,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	 */
 	isb();
 	armv8pmu_enable_event_counter(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_disable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	/*
-	 * Disable counter and interrupt
-	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-
 	/*
 	 * Disable counter
 	 */
@@ -732,30 +716,18 @@ static void armv8pmu_disable_event(struct perf_event *event)
 	 * Disable interrupt for this counter
 	 */
 	armv8pmu_disable_event_irq(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	/* Enable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	/* Disable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v6 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK
  2020-08-19 13:34 [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (2 preceding siblings ...)
  2020-08-19 13:34 ` [PATCH v6 3/7] arm64: perf: Remove PMU locking Alexandru Elisei
@ 2020-08-19 13:34 ` Alexandru Elisei
  2020-08-19 14:23   ` peterz
  2020-08-19 13:34 ` [PATCH v6 5/7] KVM: arm64: pmu: Make overflow handler NMI safe Alexandru Elisei
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Alexandru Elisei @ 2020-08-19 13:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, Julien Thierry, Peter Zijlstra, maz,
	Jiri Olsa, Will Deacon, Arnaldo Carvalho de Melo, swboyd,
	Alexander Shishkin, Ingo Molnar, catalin.marinas, Namhyung Kim,
	will, Julien Thierry

From: Julien Thierry <julien.thierry@arm.com>

When handling events, armv8pmu_handle_irq() calls perf_event_overflow(),
and subsequently calls irq_work_run() to handle any work queued by
perf_event_overflow(). As perf_event_overflow() raises IPI_IRQ_WORK when
queuing the work, this isn't strictly necessary and the work could be
handled as part of the IPI_IRQ_WORK handler.

In the common case the IPI handler will run immediately after the PMU IRQ
handler, and where the PE is heavily loaded with interrupts other handlers
may run first, widening the window where some counters are disabled.

In practice this window is unlikely to be a significant issue, and removing
the call to irq_work_run() would make the PMU IRQ handler NMI safe in
addition to making it simpler, so let's do that.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[Reworded commit]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 80744c2f1454..5bf283518981 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -780,20 +780,16 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
+		/*
+		 * Perf event overflow will queue the processing of the event as
+		 * an irq_work which will be taken care of in the handling of
+		 * IPI_IRQ_WORK.
+		 */
 		if (perf_event_overflow(event, &data, regs))
 			cpu_pmu->disable(event);
 	}
 	armv8pmu_start(cpu_pmu);
 
-	/*
-	 * Handle the pending perf events.
-	 *
-	 * Note: this call *must* be run with interrupts disabled. For
-	 * platforms that can have the PMU interrupts raised as an NMI, this
-	 * will not work.
-	 */
-	irq_work_run();
-
 	return IRQ_HANDLED;
 }
 
-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v6 5/7] KVM: arm64: pmu: Make overflow handler NMI safe
  2020-08-19 13:34 [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (3 preceding siblings ...)
  2020-08-19 13:34 ` [PATCH v6 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK Alexandru Elisei
@ 2020-08-19 13:34 ` Alexandru Elisei
  2020-09-21 13:43   ` Will Deacon
  2020-08-19 13:34 ` [PATCH v6 6/7] arm_pmu: Introduce pmu_irq_ops Alexandru Elisei
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Alexandru Elisei @ 2020-08-19 13:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, kvm, Julien Thierry, Marc Zyngier, maz,
	Suzuki K Pouloze, Will Deacon, swboyd, James Morse,
	catalin.marinas, will, kvmarm, Julien Thierry

From: Julien Thierry <julien.thierry@arm.com>

kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
NMI context, defer waking the vcpu to an irq_work queue.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
Cc: kvm@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kvm/pmu-emul.c | 25 ++++++++++++++++++++++++-
 include/kvm/arm_pmu.h     |  1 +
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f0d0312c0a55..30268397ed06 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -433,6 +433,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
 	kvm_pmu_update_state(vcpu);
 }
 
+/**
+ * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
+ * to the event.
+ * This is why we need a callback to do it once outside of the NMI context.
+ */
+static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_pmu *pmu;
+
+	pmu = container_of(work, struct kvm_pmu, overflow_work);
+	vcpu = kvm_pmc_to_vcpu(&pmu->pmc[0]);
+
+	kvm_vcpu_kick(vcpu);
+}
+
 /**
  * When the perf event overflows, set the overflow status and inform the vcpu.
  */
@@ -465,7 +481,11 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 
 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
-		kvm_vcpu_kick(vcpu);
+
+		if (!in_nmi())
+			kvm_vcpu_kick(vcpu);
+		else
+			irq_work_queue(&vcpu->arch.pmu.overflow_work);
 	}
 
 	cpu_pmu->pmu.start(perf_event, PERF_EF_RELOAD);
@@ -764,6 +784,9 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 			return ret;
 	}
 
+	init_irq_work(&vcpu->arch.pmu.overflow_work,
+		      kvm_pmu_perf_overflow_notify_vcpu);
+
 	vcpu->arch.pmu.created = true;
 	return 0;
 }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 6db030439e29..dbf4f08d42e5 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -27,6 +27,7 @@ struct kvm_pmu {
 	bool ready;
 	bool created;
 	bool irq_level;
+	struct irq_work overflow_work;
 };
 
 #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v6 6/7] arm_pmu: Introduce pmu_irq_ops
  2020-08-19 13:34 [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (4 preceding siblings ...)
  2020-08-19 13:34 ` [PATCH v6 5/7] KVM: arm64: pmu: Make overflow handler NMI safe Alexandru Elisei
@ 2020-08-19 13:34 ` Alexandru Elisei
  2020-09-21 13:55   ` Will Deacon
  2020-08-19 13:34 ` [PATCH v6 7/7] arm_pmu: arm64: Use NMIs for PMU Alexandru Elisei
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Alexandru Elisei @ 2020-08-19 13:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, Julien Thierry, maz, Will Deacon,
	swboyd, catalin.marinas, will, Julien Thierry

From: Julien Thierry <julien.thierry@arm.com>

Currently the PMU interrupt can either be a normal irq or a percpu irq.
Supporting NMI will introduce two cases for each existing one. It becomes
a mess of 'if's when managing the interrupt.

Define sets of callbacks for operations commonly done on the interrupt. The
appropriate set of callbacks is selected at interrupt request time and
simplifies interrupt enabling/disabling and freeing.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 drivers/perf/arm_pmu.c | 86 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 70 insertions(+), 16 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index df352b334ea7..17e5952d21e4 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -26,8 +26,46 @@
 
 #include <asm/irq_regs.h>
 
+static int armpmu_count_irq_users(const int irq);
+
+struct pmu_irq_ops {
+	void (*enable_pmuirq)(unsigned int irq);
+	void (*disable_pmuirq)(unsigned int irq);
+	void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
+};
+
+static void armpmu_free_pmuirq(unsigned int irq, int cpu, void __percpu *devid)
+{
+	free_irq(irq, per_cpu_ptr(devid, cpu));
+}
+
+static const struct pmu_irq_ops pmuirq_ops = {
+	.enable_pmuirq = enable_irq,
+	.disable_pmuirq = disable_irq_nosync,
+	.free_pmuirq = armpmu_free_pmuirq
+};
+
+static void armpmu_enable_percpu_pmuirq(unsigned int irq)
+{
+	enable_percpu_irq(irq, IRQ_TYPE_NONE);
+}
+
+static void armpmu_free_percpu_pmuirq(unsigned int irq, int cpu,
+				   void __percpu *devid)
+{
+	if (armpmu_count_irq_users(irq) == 1)
+		free_percpu_irq(irq, devid);
+}
+
+static const struct pmu_irq_ops percpu_pmuirq_ops = {
+	.enable_pmuirq = armpmu_enable_percpu_pmuirq,
+	.disable_pmuirq = disable_percpu_irq,
+	.free_pmuirq = armpmu_free_percpu_pmuirq
+};
+
 static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
 static DEFINE_PER_CPU(int, cpu_irq);
+static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);
 
 static inline u64 arm_pmu_event_max_period(struct perf_event *event)
 {
@@ -544,6 +582,19 @@ static int armpmu_count_irq_users(const int irq)
 	return count;
 }
 
+static const struct pmu_irq_ops *armpmu_find_irq_ops(int irq)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(cpu_irq, cpu) == irq
+		    && per_cpu(cpu_irq_ops, cpu))
+			return per_cpu(cpu_irq_ops, cpu);
+	}
+
+	return NULL;
+}
+
 void armpmu_free_irq(int irq, int cpu)
 {
 	if (per_cpu(cpu_irq, cpu) == 0)
@@ -551,18 +602,18 @@ void armpmu_free_irq(int irq, int cpu)
 	if (WARN_ON(irq != per_cpu(cpu_irq, cpu)))
 		return;
 
-	if (!irq_is_percpu_devid(irq))
-		free_irq(irq, per_cpu_ptr(&cpu_armpmu, cpu));
-	else if (armpmu_count_irq_users(irq) == 1)
-		free_percpu_irq(irq, &cpu_armpmu);
+	per_cpu(cpu_irq_ops, cpu)->free_pmuirq(irq, cpu, &cpu_armpmu);
 
 	per_cpu(cpu_irq, cpu) = 0;
+	per_cpu(cpu_irq_ops, cpu) = NULL;
 }
 
 int armpmu_request_irq(int irq, int cpu)
 {
 	int err = 0;
 	const irq_handler_t handler = armpmu_dispatch_irq;
+	const struct pmu_irq_ops *irq_ops;
+
 	if (!irq)
 		return 0;
 
@@ -584,15 +635,26 @@ int armpmu_request_irq(int irq, int cpu)
 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
 		err = request_irq(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&cpu_armpmu, cpu));
+
+		irq_ops = &pmuirq_ops;
 	} else if (armpmu_count_irq_users(irq) == 0) {
 		err = request_percpu_irq(irq, handler, "arm-pmu",
 					 &cpu_armpmu);
+
+		irq_ops = &percpu_pmuirq_ops;
+	} else {
+		/* Per cpudevid irq was already requested by another CPU */
+		irq_ops = armpmu_find_irq_ops(irq);
+
+		if (WARN_ON(!irq_ops))
+			err = -EINVAL;
 	}
 
 	if (err)
 		goto err_out;
 
 	per_cpu(cpu_irq, cpu) = irq;
+	per_cpu(cpu_irq_ops, cpu) = irq_ops;
 	return 0;
 
 err_out:
@@ -625,12 +687,8 @@ static int arm_perf_starting_cpu(unsigned int cpu, struct hlist_node *node)
 	per_cpu(cpu_armpmu, cpu) = pmu;
 
 	irq = armpmu_get_cpu_irq(pmu, cpu);
-	if (irq) {
-		if (irq_is_percpu_devid(irq))
-			enable_percpu_irq(irq, IRQ_TYPE_NONE);
-		else
-			enable_irq(irq);
-	}
+	if (irq)
+		per_cpu(cpu_irq_ops, cpu)->enable_pmuirq(irq);
 
 	return 0;
 }
@@ -644,12 +702,8 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
 		return 0;
 
 	irq = armpmu_get_cpu_irq(pmu, cpu);
-	if (irq) {
-		if (irq_is_percpu_devid(irq))
-			disable_percpu_irq(irq);
-		else
-			disable_irq_nosync(irq);
-	}
+	if (irq)
+		per_cpu(cpu_irq_ops, cpu)->disable_pmuirq(irq);
 
 	per_cpu(cpu_armpmu, cpu) = NULL;
 
-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v6 7/7] arm_pmu: arm64: Use NMIs for PMU
  2020-08-19 13:34 [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (5 preceding siblings ...)
  2020-08-19 13:34 ` [PATCH v6 6/7] arm_pmu: Introduce pmu_irq_ops Alexandru Elisei
@ 2020-08-19 13:34 ` Alexandru Elisei
  2020-09-04  8:58 ` [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Sumit Garg
  2020-09-21 13:59 ` Will Deacon
  8 siblings, 0 replies; 19+ messages in thread
From: Alexandru Elisei @ 2020-08-19 13:34 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, Julien Thierry, maz, Will Deacon,
	swboyd, catalin.marinas, will, Julien Thierry

From: Julien Thierry <julien.thierry@arm.com>

Add required PMU interrupt operations for NMIs. Request interrupt lines as
NMIs when possible, otherwise fall back to normal interrupts.

NMIs are only supported on the arm64 architecture with a GICv3 irqchip.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[Added that NMIs only work on arm64 + GICv3]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 drivers/perf/arm_pmu.c | 62 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 56 insertions(+), 6 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 17e5952d21e4..dd9d7f61ee29 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -45,6 +45,17 @@ static const struct pmu_irq_ops pmuirq_ops = {
 	.free_pmuirq = armpmu_free_pmuirq
 };
 
+static void armpmu_free_pmunmi(unsigned int irq, int cpu, void __percpu *devid)
+{
+	free_nmi(irq, per_cpu_ptr(devid, cpu));
+}
+
+static const struct pmu_irq_ops pmunmi_ops = {
+	.enable_pmuirq = enable_nmi,
+	.disable_pmuirq = disable_nmi_nosync,
+	.free_pmuirq = armpmu_free_pmunmi
+};
+
 static void armpmu_enable_percpu_pmuirq(unsigned int irq)
 {
 	enable_percpu_irq(irq, IRQ_TYPE_NONE);
@@ -63,6 +74,31 @@ static const struct pmu_irq_ops percpu_pmuirq_ops = {
 	.free_pmuirq = armpmu_free_percpu_pmuirq
 };
 
+static void armpmu_enable_percpu_pmunmi(unsigned int irq)
+{
+	if (!prepare_percpu_nmi(irq))
+		enable_percpu_nmi(irq, IRQ_TYPE_NONE);
+}
+
+static void armpmu_disable_percpu_pmunmi(unsigned int irq)
+{
+	disable_percpu_nmi(irq);
+	teardown_percpu_nmi(irq);
+}
+
+static void armpmu_free_percpu_pmunmi(unsigned int irq, int cpu,
+				      void __percpu *devid)
+{
+	if (armpmu_count_irq_users(irq) == 1)
+		free_percpu_nmi(irq, devid);
+}
+
+static const struct pmu_irq_ops percpu_pmunmi_ops = {
+	.enable_pmuirq = armpmu_enable_percpu_pmunmi,
+	.disable_pmuirq = armpmu_disable_percpu_pmunmi,
+	.free_pmuirq = armpmu_free_percpu_pmunmi
+};
+
 static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
 static DEFINE_PER_CPU(int, cpu_irq);
 static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);
@@ -633,15 +669,29 @@ int armpmu_request_irq(int irq, int cpu)
 			    IRQF_NO_THREAD;
 
 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
-		err = request_irq(irq, handler, irq_flags, "arm-pmu",
+
+		err = request_nmi(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&cpu_armpmu, cpu));
 
-		irq_ops = &pmuirq_ops;
+		/* If cannot get an NMI, get a normal interrupt */
+		if (err) {
+			err = request_irq(irq, handler, irq_flags, "arm-pmu",
+					  per_cpu_ptr(&cpu_armpmu, cpu));
+			irq_ops = &pmuirq_ops;
+		} else {
+			irq_ops = &pmunmi_ops;
+		}
 	} else if (armpmu_count_irq_users(irq) == 0) {
-		err = request_percpu_irq(irq, handler, "arm-pmu",
-					 &cpu_armpmu);
-
-		irq_ops = &percpu_pmuirq_ops;
+		err = request_percpu_nmi(irq, handler, "arm-pmu", &cpu_armpmu);
+
+		/* If cannot get an NMI, get a normal interrupt */
+		if (err) {
+			err = request_percpu_irq(irq, handler, "arm-pmu",
+						 &cpu_armpmu);
+			irq_ops = &percpu_pmuirq_ops;
+		} else {
+			irq_ops = &percpu_pmunmi_ops;
+		}
 	} else {
 		/* Per cpudevid irq was already requested by another CPU */
 		irq_ops = armpmu_find_irq_ops(irq);
-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK
  2020-08-19 13:34 ` [PATCH v6 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK Alexandru Elisei
@ 2020-08-19 14:23   ` peterz
  0 siblings, 0 replies; 19+ messages in thread
From: peterz @ 2020-08-19 14:23 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: mark.rutland, sumit.garg, Julien Thierry, Alexander Shishkin,
	maz, Jiri Olsa, Will Deacon, linux-kernel,
	Arnaldo Carvalho de Melo, swboyd, Ingo Molnar, Julien Thierry,
	catalin.marinas, Namhyung Kim, will, linux-arm-kernel

On Wed, Aug 19, 2020 at 02:34:16PM +0100, Alexandru Elisei wrote:
> From: Julien Thierry <julien.thierry@arm.com>
> 
> When handling events, armv8pmu_handle_irq() calls perf_event_overflow(),
> and subsequently calls irq_work_run() to handle any work queued by
> perf_event_overflow(). As perf_event_overflow() raises IPI_IRQ_WORK when
> queuing the work, this isn't strictly necessary and the work could be
> handled as part of the IPI_IRQ_WORK handler.
> 
> In the common case the IPI handler will run immediately after the PMU IRQ
> handler, and where the PE is heavily loaded with interrupts other handlers
> may run first, widening the window where some counters are disabled.
> 
> In practice this window is unlikely to be a significant issue, and removing
> the call to irq_work_run() would make the PMU IRQ handler NMI safe in
> addition to making it simpler, so let's do that.

Makes sense, IIRC this code was written before ARM grew IPI_IRQ_WORK
support and then it makes sense, but now that you have it and are moving
to NMI-like context this is absolutely the right thing to do.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt
  2020-08-19 13:34 [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (6 preceding siblings ...)
  2020-08-19 13:34 ` [PATCH v6 7/7] arm_pmu: arm64: Use NMIs for PMU Alexandru Elisei
@ 2020-09-04  8:58 ` Sumit Garg
  2020-09-21 13:59 ` Will Deacon
  8 siblings, 0 replies; 19+ messages in thread
From: Sumit Garg @ 2020-09-04  8:58 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: Mark Rutland, Marc Zyngier, Linux Kernel Mailing List,
	Stephen Boyd, Catalin Marinas, Will Deacon, linux-arm-kernel

Hi Alex,

On Wed, 19 Aug 2020 at 19:03, Alexandru Elisei <alexandru.elisei@arm.com> wrote:
>
> The series makes the arm_pmu driver use NMIs for the perf interrupt when
> NMIs are available on the platform (currently, only arm64 + GICv3). To make
> it easier to play with the patches, I've pushed a branch at [1]:
>
> $ git clone -b pmu-nmi-v6 git://linux-arm.org/linux-ae
>
> I've tested the series on an espressobin v7*. These are the results of
> running perf record -a -- sleep 60:
>
> 1. Without the patches:
>
>     16.73%  [k] arch_local_irq_enable
>     12.20%  [k] arch_cpu_idle
>      8.61%  [k] _raw_spin_unlock_irqrestore
>      4.09%  [k] arch_local_irq_enable
>      2.25%  [k] arch_local_irq_enable
>      1.82%  [k] arch_counter_get_cntpct
>      [..]
>
> 2. Using NMIs:
>
>      3.37%  [k] arch_counter_get_cntpct
>      2.62%  [.] _IO_fwrite
>      1.62%  [.] __gconv_transform_ascii_internal
>      1.49%  [.] __mbrtowc
>      1.44%  [k] el0_svc_common
>      1.31%  [.] strchr
>      [..]
>
> When running perf record -a -- iperf3 -c 127.0.0.1 -t 60:
>
> 1. Without the patches:
>     24.25%  [k] __arch_copy_from_user
>     20.94%  [k] __arch_copy_to_user
>      5.71%  [k] arch_local_irq_enable
>      3.12%  [k] _raw_spin_unlock_irqrestore
>      2.01%  [k] __free_pages_ok
>      1.48%  [k] arch_cpu_idle
>      [..]
>
> 2. Using NMIs:
>
>     23.15%  [k] __arch_copy_from_user
>     21.68%  [k] __arch_copy_to_user
>      1.23%  [k] tcp_ack
>      1.08%  [k] tcp_sendmsg_locked
>      0.97%  [k] rmqueue
>      0.91%  [k] __free_pages_ok
>      [..]
>
> I've ran the same tests in a VM when both host+guest use NMIs, and when
> neither use them. All of these tests were also ran on the model.  Similar
> results in all cases.
>
> * All the firmware versions for espressobin v7 that I've tried clear
> SCR_EL3.FIQ, which means that NMIs don't work. To make them work on the
> board, I modified the GICv3 driver. That's why I would really appreciate
> someone testing this series on a board where NMIs work without any GIC
> changes.

This series works perfectly fine for me without any GIC changes on
Developerbox. FWIW:

Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)

-Sumit

> For people who want to test the series, but don't have a board
> with firmware that sets SCR_EL3.FIQ, I've pushed a branch [2] with the
> GICv3 drivers changes necessary to make NMIs work:
>
> $ git clone -b pmu-nmi-v6-nmi-fiq-clear-v2 git://linux-arm.org/linux-ae
>
> Summary of the patches:
> * Patch 1 is a fix for a bug that Julien found during the review for v4.
> * Patches 2 and 3 remove locking from arm64 perf event code.
> * Patches 4 and 5 makes the arm64 PMU interrupt handler NMI safe.
> * Patches 6 and 7 enable the use of NMIs on arm64 with a GICv3 irqchip.
>
> Changes since v5 [3]:
> - Rebased on top of v5.9-rc1.
> - Typo fixes.
> - Added comments to the ISB added by patches #1 and #2.
> - Reworded message for patch #4, as per Mark's excellent suggestion.
>
> Changes since v4 [4]:
> - Rebased on top of v5.8-rc1 and dropped the Tested-by tags because it's
>   been almost a year since the series has been tested.
> - Dropped patch 3 because I couldn't find any instance where
>   armv7pmu_read_counter() was called with interrupts enabled. I've also
>   tested this by running several instances of perf for a few hours, and the
>   function was called every time with interrupts disabled.
> - Dropped patches 4 and 5 because the tradeoff wasn't worth it in my
>   opinion: the irq handler was slower all the time (because it
>   saved/restored the counter select register), in exchange for being
>   slightly faster on the rare ocassions when it triggered at the beginning
>   of the critical sections.
> - Minor changes here and there to address review comments.
>
> Changes since v3 [5]:
> - Added tags
> - Fix build issue for perf_event_v6
> - Don't disable preemption in pmu->enable()
> - Always rely on IPI_IRQ_WORK to run the queued work
> - Fixed typos + cleanups
>
> Changes since v2 [6]:
> - Rebased on recent linux-next (next-20190708)
> - Fixed a number of bugs with indices (reported by Wei)
> - Minor style fixes
>
> Changes since v1 [7]:
> - Rebased on v5.1-rc1
> - Pseudo-NMI has changed a lot since then, use the (now merged) NMI API
> - Remove locking from armv7 perf_event
> - Use locking only in armv6 perf_event
> - Use direct counter/type registers insted of selector register for armv8
>
> [1] http://www.linux-arm.org/git?p=linux-ae.git;a=shortlog;h=refs/heads/pmu-nmi-v6
> [2] http://www.linux-arm.org/git?p=linux-ae.git;a=shortlog;h=refs/heads/pmu-nmi-v6-nmi-fiq-clear-v2
> [3] https://www.spinics.net/lists/kernel/msg3554236.html
> [4] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/666824.html
> [5] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/665339.html
> [6] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-March/640536.html
> [7] https://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/554611.html
>
> Alexandru Elisei (1):
>   arm64: perf: Add missing ISB in armv8pmu_enable_event()
>
> Julien Thierry (5):
>   arm64: perf: Remove PMU locking
>   arm64: perf: Defer irq_work to IPI_IRQ_WORK
>   KVM: arm64: pmu: Make overflow handler NMI safe
>   arm_pmu: Introduce pmu_irq_ops
>   arm_pmu: arm64: Use NMIs for PMU
>
> Mark Rutland (1):
>   arm64: perf: Avoid PMXEV* indirection
>
>  arch/arm64/kernel/perf_event.c | 145 +++++++++++++++++++++------------
>  arch/arm64/kvm/pmu-emul.c      |  25 +++++-
>  drivers/perf/arm_pmu.c         | 142 +++++++++++++++++++++++++++-----
>  include/kvm/arm_pmu.h          |   1 +
>  4 files changed, 241 insertions(+), 72 deletions(-)
>
> --
> 2.28.0
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 5/7] KVM: arm64: pmu: Make overflow handler NMI safe
  2020-08-19 13:34 ` [PATCH v6 5/7] KVM: arm64: pmu: Make overflow handler NMI safe Alexandru Elisei
@ 2020-09-21 13:43   ` Will Deacon
  2020-09-21 15:45     ` Alexandru Elisei
  0 siblings, 1 reply; 19+ messages in thread
From: Will Deacon @ 2020-09-21 13:43 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: mark.rutland, sumit.garg, kvm, Julien Thierry, Marc Zyngier, maz,
	Suzuki K Pouloze, Will Deacon, linux-kernel, swboyd, James Morse,
	Julien Thierry, catalin.marinas, kvmarm, linux-arm-kernel

On Wed, Aug 19, 2020 at 02:34:17PM +0100, Alexandru Elisei wrote:
> From: Julien Thierry <julien.thierry@arm.com>
> 
> kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
> NMI context, defer waking the vcpu to an irq_work queue.
> 
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
> Cc: kvm@vger.kernel.org
> Cc: kvmarm@lists.cs.columbia.edu
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> ---
>  arch/arm64/kvm/pmu-emul.c | 25 ++++++++++++++++++++++++-
>  include/kvm/arm_pmu.h     |  1 +
>  2 files changed, 25 insertions(+), 1 deletion(-)

I'd like an Ack from the KVM side on this one, but some minor comments
inline.

> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index f0d0312c0a55..30268397ed06 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -433,6 +433,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
>  	kvm_pmu_update_state(vcpu);
>  }
>  
> +/**
> + * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
> + * to the event.
> + * This is why we need a callback to do it once outside of the NMI context.
> + */
> +static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
> +{
> +	struct kvm_vcpu *vcpu;
> +	struct kvm_pmu *pmu;
> +
> +	pmu = container_of(work, struct kvm_pmu, overflow_work);
> +	vcpu = kvm_pmc_to_vcpu(&pmu->pmc[0]);

Can you spell this kvm_pmc_to_vcpu(pmu->pmc); ?

> +
> +	kvm_vcpu_kick(vcpu);

How do we guarantee that the vCPU is still around by the time this runs?
Sorry to ask such a horrible question, but I don't see anything associating
the workqueue with the lifetime of the vCPU.

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 2/7] arm64: perf: Avoid PMXEV* indirection
  2020-08-19 13:34 ` [PATCH v6 2/7] arm64: perf: Avoid PMXEV* indirection Alexandru Elisei
@ 2020-09-21 13:43   ` Will Deacon
  0 siblings, 0 replies; 19+ messages in thread
From: Will Deacon @ 2020-09-21 13:43 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: mark.rutland, sumit.garg, Julien Thierry, Peter Zijlstra, maz,
	Will Deacon, linux-kernel, Arnaldo Carvalho de Melo, swboyd,
	Alexander Shishkin, Ingo Molnar, Julien Thierry, catalin.marinas,
	Namhyung Kim, Jiri Olsa, linux-arm-kernel

On Wed, Aug 19, 2020 at 02:34:14PM +0100, Alexandru Elisei wrote:
> From: Mark Rutland <mark.rutland@arm.com>
> 
> Currently we access the counter registers and their respective type
> registers indirectly. This requires us to write to PMSELR, issue an ISB,
> then access the relevant PMXEV* registers.
> 
> This is unfortunate, because:
> 
> * Under virtualization, accessing one register requires two traps to
>   the hypervisor, even though we could access the register directly with
>   a single trap.
> 
> * We have to issue an ISB which we could otherwise avoid the cost of.
> 
> * When we use NMIs, the NMI handler will have to save/restore the select
>   register in case the code it preempted was attempting to access a
>   counter or its type register.
> 
> We can avoid these issues by directly accessing the relevant registers.
> This patch adds helpers to do so.
> 
> In armv8pmu_enable_event() we still need the ISB to prevent the PE from
> reordering the write to PMINTENSET_EL1 register. If the interrupt is
> enabled before we disable the counter and the new event is configured,
> we might get an interrupt triggered by the previously programmed event
> overflowing, but which we wrongly attribute to the event that we are
> enabling.
> 
> In the process, remove the comment that refers to the ARMv7 PMU.
> 
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [Julien T.: Don't inline read/write functions to avoid big code-size
> 	increase, remove unused read_pmevtypern function,
> 	fix counter index issue.]
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> [Removed comment, removed trailing semicolons in macros, added ISB]

nit: but it's customary to prefix these with your name, so it's easy to
figure out who made changes (like Julien did above).

(similar comment for other patches in this series)

> @@ -620,9 +686,14 @@ static void armv8pmu_enable_event(struct perf_event *event)
>  	 * Disable counter
>  	 */
>  	armv8pmu_disable_event_counter(event);
> +	/*
> +	 * Make sure the effects of disabling the counter are visible before we
> +	 * start configuring the event.
> +	 */
> +	isb();

With the isb() added by patch 1, why don't we just make these implicit
in armv8_{enable,disable}_event_counter() ?

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 6/7] arm_pmu: Introduce pmu_irq_ops
  2020-08-19 13:34 ` [PATCH v6 6/7] arm_pmu: Introduce pmu_irq_ops Alexandru Elisei
@ 2020-09-21 13:55   ` Will Deacon
  2020-09-23 15:46     ` Alexandru Elisei
  0 siblings, 1 reply; 19+ messages in thread
From: Will Deacon @ 2020-09-21 13:55 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: mark.rutland, sumit.garg, Julien Thierry, maz, Will Deacon,
	linux-kernel, swboyd, Julien Thierry, catalin.marinas,
	linux-arm-kernel

On Wed, Aug 19, 2020 at 02:34:18PM +0100, Alexandru Elisei wrote:
> From: Julien Thierry <julien.thierry@arm.com>
> 
> Currently the PMU interrupt can either be a normal irq or a percpu irq.
> Supporting NMI will introduce two cases for each existing one. It becomes
> a mess of 'if's when managing the interrupt.
> 
> Define sets of callbacks for operations commonly done on the interrupt. The
> appropriate set of callbacks is selected at interrupt request time and
> simplifies interrupt enabling/disabling and freeing.
> 
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> ---
>  drivers/perf/arm_pmu.c | 86 ++++++++++++++++++++++++++++++++++--------
>  1 file changed, 70 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> index df352b334ea7..17e5952d21e4 100644
> --- a/drivers/perf/arm_pmu.c
> +++ b/drivers/perf/arm_pmu.c
> @@ -26,8 +26,46 @@
>  
>  #include <asm/irq_regs.h>
>  
> +static int armpmu_count_irq_users(const int irq);
> +
> +struct pmu_irq_ops {
> +	void (*enable_pmuirq)(unsigned int irq);
> +	void (*disable_pmuirq)(unsigned int irq);
> +	void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
> +};
> +
> +static void armpmu_free_pmuirq(unsigned int irq, int cpu, void __percpu *devid)
> +{
> +	free_irq(irq, per_cpu_ptr(devid, cpu));
> +}
> +
> +static const struct pmu_irq_ops pmuirq_ops = {
> +	.enable_pmuirq = enable_irq,
> +	.disable_pmuirq = disable_irq_nosync,
> +	.free_pmuirq = armpmu_free_pmuirq
> +};
> +
> +static void armpmu_enable_percpu_pmuirq(unsigned int irq)
> +{
> +	enable_percpu_irq(irq, IRQ_TYPE_NONE);
> +}
> +
> +static void armpmu_free_percpu_pmuirq(unsigned int irq, int cpu,
> +				   void __percpu *devid)
> +{
> +	if (armpmu_count_irq_users(irq) == 1)
> +		free_percpu_irq(irq, devid);
> +}
> +
> +static const struct pmu_irq_ops percpu_pmuirq_ops = {
> +	.enable_pmuirq = armpmu_enable_percpu_pmuirq,
> +	.disable_pmuirq = disable_percpu_irq,
> +	.free_pmuirq = armpmu_free_percpu_pmuirq
> +};
> +
>  static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
>  static DEFINE_PER_CPU(int, cpu_irq);
> +static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);

Would it make sense to put this in a structure alongside the irq?

>  
>  static inline u64 arm_pmu_event_max_period(struct perf_event *event)
>  {
> @@ -544,6 +582,19 @@ static int armpmu_count_irq_users(const int irq)
>  	return count;
>  }
>  
> +static const struct pmu_irq_ops *armpmu_find_irq_ops(int irq)
> +{
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		if (per_cpu(cpu_irq, cpu) == irq
> +		    && per_cpu(cpu_irq_ops, cpu))
> +			return per_cpu(cpu_irq_ops, cpu);
> +	}

nit, but you could make this a bit more readable:

	struct pmu_irq_ops *ops = NULL;

	for_each_possible_cpu(cpu) {
		if (per_cpu(cpu_irq, cpu) != irq)
			continue;

		ops = per_cpu(cpu_irq_ops, cpu);
		if (ops)
			break;
	}

	return ops;

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt
  2020-08-19 13:34 [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (7 preceding siblings ...)
  2020-09-04  8:58 ` [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Sumit Garg
@ 2020-09-21 13:59 ` Will Deacon
  2020-09-21 15:41   ` Alexandru Elisei
  8 siblings, 1 reply; 19+ messages in thread
From: Will Deacon @ 2020-09-21 13:59 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: mark.rutland, sumit.garg, maz, linux-kernel, swboyd,
	catalin.marinas, linux-arm-kernel

On Wed, Aug 19, 2020 at 02:34:12PM +0100, Alexandru Elisei wrote:
> The series makes the arm_pmu driver use NMIs for the perf interrupt when
> NMIs are available on the platform (currently, only arm64 + GICv3). To make
> it easier to play with the patches, I've pushed a branch at [1]:

This mostly looks good to me, but see some of the comments I left on the
code. One other thing I'm not sure about is whether or not we should tell
userspace that we're using an NMI for the sampling. Do any other
architectures have a conditional NMI?

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt
  2020-09-21 13:59 ` Will Deacon
@ 2020-09-21 15:41   ` Alexandru Elisei
  2020-09-21 17:53     ` Will Deacon
  2020-09-22 16:30     ` Alexandru Elisei
  0 siblings, 2 replies; 19+ messages in thread
From: Alexandru Elisei @ 2020-09-21 15:41 UTC (permalink / raw)
  To: Will Deacon
  Cc: mark.rutland, sumit.garg, maz, linux-kernel, swboyd,
	catalin.marinas, linux-arm-kernel

Hi Will,

Thank you so much for reviewing the series!

On 9/21/20 2:59 PM, Will Deacon wrote:
> On Wed, Aug 19, 2020 at 02:34:12PM +0100, Alexandru Elisei wrote:
>> The series makes the arm_pmu driver use NMIs for the perf interrupt when
>> NMIs are available on the platform (currently, only arm64 + GICv3). To make
>> it easier to play with the patches, I've pushed a branch at [1]:
> This mostly looks good to me, but see some of the comments I left on the
> code. One other thing I'm not sure about is whether or not we should tell
> userspace that we're using an NMI for the sampling. Do any other
> architectures have a conditional NMI?

I'm not sure about other architectures being able to configure the perf interrupt
as NMI or a regular interrupt, I'll try to find out. Regardless of what the other
architecture do, I am of the opinion that we should spell out explicitly when the
PMU is using pseudo-NMIs, because it makes a huge difference in the accuracy of
the instrumentation and the overall usefulness of perf.

If I spin a v7 quickly, is it still time to merge the series for 5.10?

Thanks,
Alex

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 5/7] KVM: arm64: pmu: Make overflow handler NMI safe
  2020-09-21 13:43   ` Will Deacon
@ 2020-09-21 15:45     ` Alexandru Elisei
  0 siblings, 0 replies; 19+ messages in thread
From: Alexandru Elisei @ 2020-09-21 15:45 UTC (permalink / raw)
  To: Will Deacon
  Cc: mark.rutland, sumit.garg, kvm, Julien Thierry, Marc Zyngier, maz,
	Suzuki K Pouloze, Will Deacon, linux-kernel, swboyd, James Morse,
	Julien Thierry, catalin.marinas, kvmarm, linux-arm-kernel

Hi Will,

On 9/21/20 2:43 PM, Will Deacon wrote:
> On Wed, Aug 19, 2020 at 02:34:17PM +0100, Alexandru Elisei wrote:
>> From: Julien Thierry <julien.thierry@arm.com>
>>
>> kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
>> NMI context, defer waking the vcpu to an irq_work queue.
>>
>> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: James Morse <james.morse@arm.com>
>> Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
>> Cc: kvm@vger.kernel.org
>> Cc: kvmarm@lists.cs.columbia.edu
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
>> ---
>>  arch/arm64/kvm/pmu-emul.c | 25 ++++++++++++++++++++++++-
>>  include/kvm/arm_pmu.h     |  1 +
>>  2 files changed, 25 insertions(+), 1 deletion(-)
> I'd like an Ack from the KVM side on this one, but some minor comments
> inline.
>
>> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
>> index f0d0312c0a55..30268397ed06 100644
>> --- a/arch/arm64/kvm/pmu-emul.c
>> +++ b/arch/arm64/kvm/pmu-emul.c
>> @@ -433,6 +433,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
>>  	kvm_pmu_update_state(vcpu);
>>  }
>>  
>> +/**
>> + * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
>> + * to the event.
>> + * This is why we need a callback to do it once outside of the NMI context.
>> + */
>> +static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
>> +{
>> +	struct kvm_vcpu *vcpu;
>> +	struct kvm_pmu *pmu;
>> +
>> +	pmu = container_of(work, struct kvm_pmu, overflow_work);
>> +	vcpu = kvm_pmc_to_vcpu(&pmu->pmc[0]);
> Can you spell this kvm_pmc_to_vcpu(pmu->pmc); ?

Of course, that is much better.

>
>> +
>> +	kvm_vcpu_kick(vcpu);
> How do we guarantee that the vCPU is still around by the time this runs?
> Sorry to ask such a horrible question, but I don't see anything associating
> the workqueue with the lifetime of the vCPU.

That's a very nice catch, indeed the code doesn't guarantee that the VM is still
around when the work is executed. I will add an irq_work_sync() call to
kvm_pmu_vcpu_destroy() (which is called by kvm_vcpu_destroy() ->
kvm_arch_vcpu_destroy()), and to kvm_pmu_vcpu_reset(), similar to how x86 handles it.

Thanks,
Alex

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt
  2020-09-21 15:41   ` Alexandru Elisei
@ 2020-09-21 17:53     ` Will Deacon
  2020-09-22 16:30     ` Alexandru Elisei
  1 sibling, 0 replies; 19+ messages in thread
From: Will Deacon @ 2020-09-21 17:53 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: mark.rutland, sumit.garg, maz, linux-kernel, swboyd,
	catalin.marinas, linux-arm-kernel

On Mon, Sep 21, 2020 at 04:41:00PM +0100, Alexandru Elisei wrote:
> On 9/21/20 2:59 PM, Will Deacon wrote:
> > On Wed, Aug 19, 2020 at 02:34:12PM +0100, Alexandru Elisei wrote:
> >> The series makes the arm_pmu driver use NMIs for the perf interrupt when
> >> NMIs are available on the platform (currently, only arm64 + GICv3). To make
> >> it easier to play with the patches, I've pushed a branch at [1]:
> > This mostly looks good to me, but see some of the comments I left on the
> > code. One other thing I'm not sure about is whether or not we should tell
> > userspace that we're using an NMI for the sampling. Do any other
> > architectures have a conditional NMI?
> 
> I'm not sure about other architectures being able to configure the perf interrupt
> as NMI or a regular interrupt, I'll try to find out. Regardless of what the other
> architecture do, I am of the opinion that we should spell out explicitly when the
> PMU is using pseudo-NMIs, because it makes a huge difference in the accuracy of
> the instrumentation and the overall usefulness of perf.
> 
> If I spin a v7 quickly, is it still time to merge the series for 5.10?

I'm on holiday for the rest of the week, but please post something when you
have it and I'll queue it if I manage to get to it.

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt
  2020-09-21 15:41   ` Alexandru Elisei
  2020-09-21 17:53     ` Will Deacon
@ 2020-09-22 16:30     ` Alexandru Elisei
  1 sibling, 0 replies; 19+ messages in thread
From: Alexandru Elisei @ 2020-09-22 16:30 UTC (permalink / raw)
  To: Will Deacon
  Cc: mark.rutland, sumit.garg, catalin.marinas, linux-kernel, swboyd,
	maz, linux-arm-kernel

Hi,

On 9/21/20 4:41 PM, Alexandru Elisei wrote:
> Hi Will,
>
> Thank you so much for reviewing the series!
>
> On 9/21/20 2:59 PM, Will Deacon wrote:
>> On Wed, Aug 19, 2020 at 02:34:12PM +0100, Alexandru Elisei wrote:
>>> The series makes the arm_pmu driver use NMIs for the perf interrupt when
>>> NMIs are available on the platform (currently, only arm64 + GICv3). To make
>>> it easier to play with the patches, I've pushed a branch at [1]:
>> This mostly looks good to me, but see some of the comments I left on the
>> code. One other thing I'm not sure about is whether or not we should tell
>> userspace that we're using an NMI for the sampling. Do any other
>> architectures have a conditional NMI?
> I'm not sure about other architectures being able to configure the perf interrupt
> as NMI or a regular interrupt, I'll try to find out. Regardless of what the other
> architecture do, I am of the opinion that we should spell out explicitly when the
> PMU is using pseudo-NMIs, because it makes a huge difference in the accuracy of
> the instrumentation and the overall usefulness of perf.

Coming back to this, looked at what other architectures are doing by grepping for
perf_pmu_register() and going from there, results below. I've found xtensa to
allow both regular IRQs and NMIs for PMU, based on a kernel config option (just
like arm64). However, the description for the config option states clearly the the
PMU IRQ will be an IRQ, while we don't have that for arm64 - the IRQ will be an
NMI automatically if the GIC is configured to use pseudo-NMIs. I think displaying
a message is the right thing to do, I'll do that for v7.

PMU IRQs for other architectures:

* alpha - PMU interrupt is always IRQ.
* arc - optional PMU interrupt; when present it's requested with
request_percpu_irq(); it prints to dmesg when overflow IRQ support has been detected.
* arm - no NMIs.
* c6x - seems like it doesn't have a PMU at all.
* csky - PMU interrupts is always IRQ.
* h8300 - seems like it doesn't have a PMU at all.
* hexagon - seems like it doesn't have a PMU at all.
* ia64 - perfmon interrupt is registered with register_percpu_irq(); it prints the
IRQ number.
* m64k - couldn't find anything resembling a PMU.
* microblaze - seems like it doesn't have a PMU.
* mips - regular IRQ; irq number and if it's shared with the timer interrupt is
printed.
* nds32 - regular IRQ; doesn't print anything regarding IRQ number.
* nios2 - seems like it doesn't have a PMU.
* openrisc - no PMU.
* parisc - no PMU IRQ, free-running counters?
* powerpc - no IRQ for IMC, hv_24x7 and hv_gpci PMUs; looks like for powerpc64,
the PMU interrupt is treated like an NMI if it is taken when interrupts are
"soft-masked", for powerpc32 it's always a regular interrupt; no information
displayed about the interrupt.
* riscv - they use regular IRQs only when multiplexing events; I haven't found any
information displayed about the PMU.
* s390 - no IRQ for cpum_cf_diag and cpm_cf; regular IRQ for cpum_sf; no dmesg output.
* sh - no IRQ.
* sparc - looks like it's always NMI; no information about IRQ is displayed.
* um - no PMU.
* x86 - the PMU interrupt is always a NMI, the lapic is configured to deliver the
PMI as an NMI (in arch/x86/events/core.c::perf_events_lapic_init()). Nothing about
interrupts printed in init_hw_perf_events();
* xtensa - the interrupt can be configurated as an NMI (EXTENSA_FAKE_NMI), but no
information about it is displayed.

Thanks,
Alex

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v6 6/7] arm_pmu: Introduce pmu_irq_ops
  2020-09-21 13:55   ` Will Deacon
@ 2020-09-23 15:46     ` Alexandru Elisei
  0 siblings, 0 replies; 19+ messages in thread
From: Alexandru Elisei @ 2020-09-23 15:46 UTC (permalink / raw)
  To: Will Deacon
  Cc: mark.rutland, sumit.garg, Julien Thierry, maz, Will Deacon,
	linux-kernel, swboyd, Julien Thierry, catalin.marinas,
	linux-arm-kernel

Hi Will,

On 9/21/20 2:55 PM, Will Deacon wrote:
> On Wed, Aug 19, 2020 at 02:34:18PM +0100, Alexandru Elisei wrote:
>> From: Julien Thierry <julien.thierry@arm.com>
>>
>> Currently the PMU interrupt can either be a normal irq or a percpu irq.
>> Supporting NMI will introduce two cases for each existing one. It becomes
>> a mess of 'if's when managing the interrupt.
>>
>> Define sets of callbacks for operations commonly done on the interrupt. The
>> appropriate set of callbacks is selected at interrupt request time and
>> simplifies interrupt enabling/disabling and freeing.
>>
>> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
>> ---
>>  drivers/perf/arm_pmu.c | 86 ++++++++++++++++++++++++++++++++++--------
>>  1 file changed, 70 insertions(+), 16 deletions(-)
>>
>> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
>> index df352b334ea7..17e5952d21e4 100644
>> --- a/drivers/perf/arm_pmu.c
>> +++ b/drivers/perf/arm_pmu.c
>> @@ -26,8 +26,46 @@
>>  
>>  #include <asm/irq_regs.h>
>>  
>> +static int armpmu_count_irq_users(const int irq);
>> +
>> +struct pmu_irq_ops {
>> +	void (*enable_pmuirq)(unsigned int irq);
>> +	void (*disable_pmuirq)(unsigned int irq);
>> +	void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
>> +};
>> +
>> +static void armpmu_free_pmuirq(unsigned int irq, int cpu, void __percpu *devid)
>> +{
>> +	free_irq(irq, per_cpu_ptr(devid, cpu));
>> +}
>> +
>> +static const struct pmu_irq_ops pmuirq_ops = {
>> +	.enable_pmuirq = enable_irq,
>> +	.disable_pmuirq = disable_irq_nosync,
>> +	.free_pmuirq = armpmu_free_pmuirq
>> +};
>> +
>> +static void armpmu_enable_percpu_pmuirq(unsigned int irq)
>> +{
>> +	enable_percpu_irq(irq, IRQ_TYPE_NONE);
>> +}
>> +
>> +static void armpmu_free_percpu_pmuirq(unsigned int irq, int cpu,
>> +				   void __percpu *devid)
>> +{
>> +	if (armpmu_count_irq_users(irq) == 1)
>> +		free_percpu_irq(irq, devid);
>> +}
>> +
>> +static const struct pmu_irq_ops percpu_pmuirq_ops = {
>> +	.enable_pmuirq = armpmu_enable_percpu_pmuirq,
>> +	.disable_pmuirq = disable_percpu_irq,
>> +	.free_pmuirq = armpmu_free_percpu_pmuirq
>> +};
>> +
>>  static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
>>  static DEFINE_PER_CPU(int, cpu_irq);
>> +static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);
> Would it make sense to put this in a structure alongside the irq?
It doesn't really work, because we need the irq number to be percpu for
armpmu_free_irq() to work correctly. If we have a percpu pointer to a struct, the
first cpu that frees the irq will set it to 0, and all subsequent CPUs that share
the same struct will read 0 as the irq number, which will trigger the WARN_ON and
then return early.
>
>>  
>>  static inline u64 arm_pmu_event_max_period(struct perf_event *event)
>>  {
>> @@ -544,6 +582,19 @@ static int armpmu_count_irq_users(const int irq)
>>  	return count;
>>  }
>>  
>> +static const struct pmu_irq_ops *armpmu_find_irq_ops(int irq)
>> +{
>> +	int cpu;
>> +
>> +	for_each_possible_cpu(cpu) {
>> +		if (per_cpu(cpu_irq, cpu) == irq
>> +		    && per_cpu(cpu_irq_ops, cpu))
>> +			return per_cpu(cpu_irq_ops, cpu);
>> +	}
> nit, but you could make this a bit more readable:
>
> 	struct pmu_irq_ops *ops = NULL;
>
> 	for_each_possible_cpu(cpu) {
> 		if (per_cpu(cpu_irq, cpu) != irq)
> 			continue;
>
> 		ops = per_cpu(cpu_irq_ops, cpu);
> 		if (ops)
> 			break;
> 	}
>
> 	return ops;

That looks better, I will change it.

Thanks,
Alex

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2020-09-23 15:46 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-19 13:34 [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
2020-08-19 13:34 ` [PATCH v6 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_event() Alexandru Elisei
2020-08-19 13:34 ` [PATCH v6 2/7] arm64: perf: Avoid PMXEV* indirection Alexandru Elisei
2020-09-21 13:43   ` Will Deacon
2020-08-19 13:34 ` [PATCH v6 3/7] arm64: perf: Remove PMU locking Alexandru Elisei
2020-08-19 13:34 ` [PATCH v6 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK Alexandru Elisei
2020-08-19 14:23   ` peterz
2020-08-19 13:34 ` [PATCH v6 5/7] KVM: arm64: pmu: Make overflow handler NMI safe Alexandru Elisei
2020-09-21 13:43   ` Will Deacon
2020-09-21 15:45     ` Alexandru Elisei
2020-08-19 13:34 ` [PATCH v6 6/7] arm_pmu: Introduce pmu_irq_ops Alexandru Elisei
2020-09-21 13:55   ` Will Deacon
2020-09-23 15:46     ` Alexandru Elisei
2020-08-19 13:34 ` [PATCH v6 7/7] arm_pmu: arm64: Use NMIs for PMU Alexandru Elisei
2020-09-04  8:58 ` [PATCH v6 0/7] arm_pmu: Use NMI for perf interrupt Sumit Garg
2020-09-21 13:59 ` Will Deacon
2020-09-21 15:41   ` Alexandru Elisei
2020-09-21 17:53     ` Will Deacon
2020-09-22 16:30     ` Alexandru Elisei

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).