linux-kernel.vger.kernel.org archive mirror
* [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt
@ 2020-09-24 11:06 Alexandru Elisei
  2020-09-24 11:07 ` [PATCH v7 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_counter() Alexandru Elisei
                   ` (7 more replies)
  0 siblings, 8 replies; 13+ messages in thread
From: Alexandru Elisei @ 2020-09-24 11:06 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, maz, swboyd, catalin.marinas, will

The series changes the arm_pmu driver to use NMIs for the perf interrupt
when NMIs are available on the platform (currently, only arm64 + GICv3). To
make it easier to play with the patches, I've pushed a branch at [1]:

$ git clone -b pmu-nmi-v7 git://linux-arm.org/linux-ae

The changes from v6 were minor, but I've still run the same tests on an
espressobin v7*. These are the results of running perf record -a -- sleep
60 (all results show kernel symbols with overhead >= 1%):

1. Without the patches:

    11.14%  [k] _raw_spin_unlock_irq
     9.09%  [k] _raw_spin_unlock_irqrestore
     8.55%  [k] arch_cpu_idle
     2.60%  [k] __softirqentry_text_start
     1.13%  [k] arch_counter_get_cntpct
     1.13%  [k] tick_nohz_idle_exit
     1.09%  [k] el0_svc_common.constprop.0
    [..]

2. Using NMIs:

     9.46%  [k] arch_counter_get_cntpct
     5.59%  [k] wait_for_xmitr
     3.78%  [k] ktime_get
     3.63%  [k] __delay
     1.54%  [k] _raw_write_lock_irqsave
     1.29%  [k] arch_timer_evtstrm_available
     1.24%  [k] _raw_spin_lock_irqsave
     1.11%  [k] refresh_cpu_vm_stats
     1.06%  [k] __schedule
     1.03%  [k] _find_next_bit.constprop.0
     [..]

When running perf record -a -- iperf3 -c 127.0.0.1 -t 60:

1. Without the patches:

    25.46%  [k] __arch_copy_from_user
    24.06%  [k] __arch_copy_to_user
     4.17%  [k] _raw_spin_unlock_irq
     2.52%  [k] _raw_spin_unlock_irqrestore
     1.90%  [k] __free_pages_ok
     1.22%  [k] get_page_from_freelist
     1.21%  [k] arch_cpu_idle
     1.19%  [k] tcp_ack
    [..]

2. Using NMIs:

    23.94%  [k] __arch_copy_from_user
    21.65%  [k] __arch_copy_to_user
     1.19%  [k] tcp_ack
    [..]

I've run the same tests in a VM when both host+guest use NMIs, and when
they don't. All of these tests were also run on the model. The results were
similar in all cases.

* All the firmware versions for espressobin v7 that I've tried clear
SCR_EL3.FIQ, which means that NMIs don't work with the upstream kernel.  To
test the series, I've applied the patches from irq/irqchip-next to enable
pseudo-NMIs when SCR_EL3.FIQ == 0. I've also pushed a branch [2] with the
patches applied on top of this series:

$ git clone -b pmu-nmi-v7-nmi-fiq-clear-v3 git://linux-arm.org/linux-ae

Summary of the patches:
* Patch 1 is a fix for a bug that Julien found during the review for v4.
* Patches 2 and 3 remove locking from arm64 perf event code.
* Patches 4 and 5 make the arm64 PMU interrupt handler NMI safe.
* Patches 6 and 7 enable the use of NMIs on arm64 with a GICv3 irqchip.

Changes since v6 [3]:
- Rebased on top of v5.9-rc6
- Moved the ISBs to armv8pmu_{enable, disable}_counter()
- Added irq_work_sync() to kvm_pmu_vcpu_destroy()
- Reworked armpmu_find_irq_ops() as per Will's suggestion
- Print message when NMIs are used by the PMU
- The changes were minor, so I've added the Tested-by tag for v6 from Sumit
  Garg. Many thanks for the testing!

Changes since v5 [4]:
- Rebased on top of v5.9-rc1.
- Typo fixes.
- Added comments to the ISB added by patches #1 and #2.
- Reworded message for patch #4, as per Mark's excellent suggestion.

Changes since v4 [5]:
- Rebased on top of v5.8-rc1 and dropped the Tested-by tags because it's
  been almost a year since the series was last tested.
- Dropped patch 3 because I couldn't find any instance where
  armv7pmu_read_counter() was called with interrupts enabled. I've also
  tested this by running several instances of perf for a few hours, and the
  function was called every time with interrupts disabled.
- Dropped patches 4 and 5 because the tradeoff wasn't worth it in my
  opinion: the irq handler was slower all the time (because it
  saved/restored the counter select register), in exchange for being
  slightly faster on the rare occasions when it triggered at the beginning
  of the critical sections.
- Minor changes here and there to address review comments.

Changes since v3 [6]:
- Added tags
- Fixed build issue for perf_event_v6
- Don't disable preemption in pmu->enable()
- Always rely on IPI_IRQ_WORK to run the queued work
- Fixed typos + cleanups

Changes since v2 [7]:
- Rebased on recent linux-next (next-20190708)
- Fixed a number of bugs with indices (reported by Wei)
- Minor style fixes

Changes since v1 [8]:
- Rebased on v5.1-rc1
- Pseudo-NMI has changed a lot since then, use the (now merged) NMI API
- Remove locking from armv7 perf_event
- Use locking only in armv6 perf_event
- Use direct counter/type registers instead of the selector register for armv8

[1] http://www.linux-arm.org/git?p=linux-ae.git;a=shortlog;h=refs/heads/pmu-nmi-v7
[2] http://www.linux-arm.org/git?p=linux-ae.git;a=shortlog;h=refs/heads/pmu-nmi-v7-nmi-fiq-clear-v3
[3] https://lkml.org/lkml/2020/8/19/671
[4] https://www.spinics.net/lists/kernel/msg3554236.html
[5] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/666824.html
[6] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/665339.html
[7] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-March/640536.html
[8] https://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/554611.html

Alexandru Elisei (1):
  arm64: perf: Add missing ISB in armv8pmu_enable_counter()

Julien Thierry (5):
  arm64: perf: Remove PMU locking
  arm64: perf: Defer irq_work to IPI_IRQ_WORK
  KVM: arm64: pmu: Make overflow handler NMI safe
  arm_pmu: Introduce pmu_irq_ops
  arm_pmu: arm64: Use NMIs for PMU

Mark Rutland (1):
  arm64: perf: Avoid PMXEV* indirection

 arch/arm64/kernel/perf_event.c | 146 ++++++++++++++++++++-----------
 arch/arm64/kvm/pmu-emul.c      |  26 +++++-
 drivers/perf/arm_pmu.c         | 155 ++++++++++++++++++++++++++++-----
 include/kvm/arm_pmu.h          |   1 +
 4 files changed, 255 insertions(+), 73 deletions(-)

-- 
2.28.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v7 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_counter()
  2020-09-24 11:06 [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
@ 2020-09-24 11:07 ` Alexandru Elisei
  2020-09-24 11:07 ` [PATCH v7 2/7] arm64: perf: Avoid PMXEV* indirection Alexandru Elisei
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Alexandru Elisei @ 2020-09-24 11:07 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, maz, swboyd, catalin.marinas, will,
	Julien Thierry, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa,
	Namhyung Kim, Julien Thierry

Writes to the PMXEVTYPER_EL0 register are not self-synchronising. In
armv8pmu_enable_event(), the PE can reorder configuring the event type
after we have enabled the counter and the interrupt. This can lead to an
interrupt being asserted because of the previous event type that we were
counting using the same counter, not the one that we've just configured.

The same rationale applies to writes to the PMINTENSET_EL1 register. The PE
can reorder enabling the interrupt at any point in the future after we have
enabled the event.

Prevent both situations from happening by adding an ISB just before we
enable the event counter.
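
For illustration only, a rough sketch of the enable path this change protects
(helper names as used by the driver; a simplified sketch, not a standalone
compilation unit):

    /*
     * Simplified view of armv8pmu_enable_event(). The event type and the
     * interrupt enable are programmed by writes that are not
     * self-synchronising, so a barrier is needed before the counter itself
     * is enabled.
     */
    static void armv8pmu_enable_event_sketch(struct perf_event *event)
    {
            armv8pmu_disable_event_counter(event);  /* stop the old event */
            armv8pmu_write_event_type(event);       /* event type write (PMXEVTYPER_EL0) */
            armv8pmu_enable_event_irq(event);       /* PMINTENSET_EL1 write */
            /*
             * armv8pmu_enable_counter() now executes an isb() before writing
             * PMCNTENSET_EL0, so the configuration above is visible before
             * the counter starts counting.
             */
            armv8pmu_enable_event_counter(event);
    }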

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 030896885ade ("arm64: Performance counters support")
Reported-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 462f9a9cc44b..481d48e3872b 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -532,6 +532,11 @@ static u32 armv8pmu_event_cnten_mask(struct perf_event *event)
 
 static inline void armv8pmu_enable_counter(u32 mask)
 {
+	/*
+	 * Make sure event configuration register writes are visible before we
+	 * enable the counter.
+	 */
+	isb();
 	write_sysreg(mask, pmcntenset_el0);
 }
 
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v7 2/7] arm64: perf: Avoid PMXEV* indirection
  2020-09-24 11:06 [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
  2020-09-24 11:07 ` [PATCH v7 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_counter() Alexandru Elisei
@ 2020-09-24 11:07 ` Alexandru Elisei
  2020-09-24 11:07 ` [PATCH v7 3/7] arm64: perf: Remove PMU locking Alexandru Elisei
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Alexandru Elisei @ 2020-09-24 11:07 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, maz, swboyd, catalin.marinas, will,
	Julien Thierry, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa,
	Namhyung Kim, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

Currently we access the counter registers and their respective type
registers indirectly. This requires us to write to PMSELR, issue an ISB,
then access the relevant PMXEV* registers.

This is unfortunate, because:

* Under virtualization, accessing one register requires two traps to
  the hypervisor, even though we could access the register directly with
  a single trap.

* We have to issue an ISB which we could otherwise avoid the cost of.

* When we use NMIs, the NMI handler will have to save/restore the select
  register in case the code it preempted was attempting to access a
  counter or its type register.

We can avoid these issues by directly accessing the relevant registers.
This patch adds helpers to do so.

In armv8pmu_enable_event() we still need the ISB to prevent the PE from
reordering the write to the PMINTENSET_EL1 register. If the interrupt is
enabled before we disable the counter and the new event is configured,
we might get an interrupt triggered by the previously programmed event
overflowing, but which we wrongly attribute to the event that we are
enabling. Execute an ISB after we disable the counter.

In the process, remove the comment that refers to the ARMv7 PMU.
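
For illustration, the change in access pattern (register and helper names as
in the diff below; a fragment only, assuming 'counter' and 'val' are in
scope):

    /* Before: indirect access through the selector register. */
    write_sysreg(counter, pmselr_el0);  /* select counter N */
    isb();                              /* required after writing PMSELR_EL0 */
    val = read_sysreg(pmxevcntr_el0);   /* access whichever counter is selected */

    /* After: direct access to the per-counter register; no ISB, and only a
     * single trap under virtualization. */
    val = read_pmevcntrn(counter);      /* dispatches to pmevcntr<n>_el0 */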

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Julien T.: Don't inline read/write functions to avoid big code-size
	increase, remove unused read_pmevtypern function,
	fix counter index issue.]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
[Alexandru E.: Removed comment, removed trailing semicolons in macros,
	added ISB]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 99 +++++++++++++++++++++++++++++-----
 1 file changed, 85 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 481d48e3872b..cb0b85bebc41 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -348,6 +348,73 @@ static inline bool armv8pmu_event_is_chained(struct perf_event *event)
 #define	ARMV8_IDX_TO_COUNTER(x)	\
 	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)
 
+/*
+ * This code is really good
+ */
+
+#define PMEVN_CASE(n, case_macro) \
+	case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro)				\
+	do {							\
+		switch (x) {					\
+		PMEVN_CASE(0,  case_macro);			\
+		PMEVN_CASE(1,  case_macro);			\
+		PMEVN_CASE(2,  case_macro);			\
+		PMEVN_CASE(3,  case_macro);			\
+		PMEVN_CASE(4,  case_macro);			\
+		PMEVN_CASE(5,  case_macro);			\
+		PMEVN_CASE(6,  case_macro);			\
+		PMEVN_CASE(7,  case_macro);			\
+		PMEVN_CASE(8,  case_macro);			\
+		PMEVN_CASE(9,  case_macro);			\
+		PMEVN_CASE(10, case_macro);			\
+		PMEVN_CASE(11, case_macro);			\
+		PMEVN_CASE(12, case_macro);			\
+		PMEVN_CASE(13, case_macro);			\
+		PMEVN_CASE(14, case_macro);			\
+		PMEVN_CASE(15, case_macro);			\
+		PMEVN_CASE(16, case_macro);			\
+		PMEVN_CASE(17, case_macro);			\
+		PMEVN_CASE(18, case_macro);			\
+		PMEVN_CASE(19, case_macro);			\
+		PMEVN_CASE(20, case_macro);			\
+		PMEVN_CASE(21, case_macro);			\
+		PMEVN_CASE(22, case_macro);			\
+		PMEVN_CASE(23, case_macro);			\
+		PMEVN_CASE(24, case_macro);			\
+		PMEVN_CASE(25, case_macro);			\
+		PMEVN_CASE(26, case_macro);			\
+		PMEVN_CASE(27, case_macro);			\
+		PMEVN_CASE(28, case_macro);			\
+		PMEVN_CASE(29, case_macro);			\
+		PMEVN_CASE(30, case_macro);			\
+		default: WARN(1, "Invalid PMEV* index\n");	\
+		}						\
+	} while (0)
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+}
+
 static inline u32 armv8pmu_pmcr_read(void)
 {
 	return read_sysreg(pmcr_el0);
@@ -376,17 +443,11 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
 	return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
 }
 
-static inline void armv8pmu_select_counter(int idx)
+static inline u64 armv8pmu_read_evcntr(int idx)
 {
 	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
-	write_sysreg(counter, pmselr_el0);
-	isb();
-}
 
-static inline u64 armv8pmu_read_evcntr(int idx)
-{
-	armv8pmu_select_counter(idx);
-	return read_sysreg(pmxevcntr_el0);
+	return read_pmevcntrn(counter);
 }
 
 static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
@@ -458,8 +519,9 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
 
 static inline void armv8pmu_write_evcntr(int idx, u64 value)
 {
-	armv8pmu_select_counter(idx);
-	write_sysreg(value, pmxevcntr_el0);
+	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
+
+	write_pmevcntrn(counter, value);
 }
 
 static inline void armv8pmu_write_hw_counter(struct perf_event *event,
@@ -494,9 +556,10 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)
 
 static inline void armv8pmu_write_evtype(int idx, u32 val)
 {
-	armv8pmu_select_counter(idx);
+	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
+
 	val &= ARMV8_PMU_EVTYPE_MASK;
-	write_sysreg(val, pmxevtyper_el0);
+	write_pmevtypern(counter, val);
 }
 
 static inline void armv8pmu_write_event_type(struct perf_event *event)
@@ -516,7 +579,10 @@ static inline void armv8pmu_write_event_type(struct perf_event *event)
 		armv8pmu_write_evtype(idx - 1, hwc->config_base);
 		armv8pmu_write_evtype(idx, chain_evt);
 	} else {
-		armv8pmu_write_evtype(idx, hwc->config_base);
+		if (idx == ARMV8_IDX_CYCLE_COUNTER)
+			write_sysreg(hwc->config_base, pmccfiltr_el0);
+		else
+			armv8pmu_write_evtype(idx, hwc->config_base);
 	}
 }
 
@@ -555,6 +621,11 @@ static inline void armv8pmu_enable_event_counter(struct perf_event *event)
 static inline void armv8pmu_disable_counter(u32 mask)
 {
 	write_sysreg(mask, pmcntenclr_el0);
+	/*
+	 * Make sure the effects of disabling the counter are visible before we
+	 * start configuring the event.
+	 */
+	isb();
 }
 
 static inline void armv8pmu_disable_event_counter(struct perf_event *event)
@@ -627,7 +698,7 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	armv8pmu_disable_event_counter(event);
 
 	/*
-	 * Set event (if destined for PMNx counters).
+	 * Set event.
 	 */
 	armv8pmu_write_event_type(event);
 
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v7 3/7] arm64: perf: Remove PMU locking
  2020-09-24 11:06 [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
  2020-09-24 11:07 ` [PATCH v7 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_counter() Alexandru Elisei
  2020-09-24 11:07 ` [PATCH v7 2/7] arm64: perf: Avoid PMXEV* indirection Alexandru Elisei
@ 2020-09-24 11:07 ` Alexandru Elisei
  2020-09-24 11:07 ` [PATCH v7 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK Alexandru Elisei
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Alexandru Elisei @ 2020-09-24 11:07 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, maz, swboyd, catalin.marinas, will,
	Julien Thierry, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa,
	Namhyung Kim

From: Julien Thierry <julien.thierry@arm.com>

The PMU is disabled and enabled, and the counters are programmed from
contexts where interrupts or preemption is disabled.

The functions to toggle the PMU and to program the PMU counters access the
registers directly and don't access data modified by the interrupt handler.
That, and the fact that they're always called from non-preemptible
contexts, means that we don't need to disable interrupts or use a spinlock.
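
A minimal sketch of what remains after the locks are removed (simplified from
the diff below; not a standalone compilation unit):

    static void armv8pmu_start_sketch(struct arm_pmu *cpu_pmu)
    {
            /*
             * PMCR_EL0 is a per-CPU register and this runs with interrupts
             * or preemption disabled, so a plain read-modify-write is
             * sufficient; no spinlock or local_irq_save() is needed.
             */
            armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
    }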

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
[Alexandru E.: Explained why locking is not needed, removed WARN_ONs]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 28 ----------------------------
 1 file changed, 28 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index cb0b85bebc41..610500166f90 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -682,15 +682,10 @@ static inline u32 armv8pmu_getreset_flags(void)
 
 static void armv8pmu_enable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
 	/*
 	 * Enable counter and interrupt, and set the counter to count
 	 * the event that we're interested in.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 
 	/*
 	 * Disable counter
@@ -711,21 +706,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	 * Enable counter
 	 */
 	armv8pmu_enable_event_counter(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_disable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	/*
-	 * Disable counter and interrupt
-	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-
 	/*
 	 * Disable counter
 	 */
@@ -735,30 +719,18 @@ static void armv8pmu_disable_event(struct perf_event *event)
 	 * Disable interrupt for this counter
 	 */
 	armv8pmu_disable_event_irq(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	/* Enable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	/* Disable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v7 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK
  2020-09-24 11:06 [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (2 preceding siblings ...)
  2020-09-24 11:07 ` [PATCH v7 3/7] arm64: perf: Remove PMU locking Alexandru Elisei
@ 2020-09-24 11:07 ` Alexandru Elisei
  2020-09-24 11:07 ` [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe Alexandru Elisei
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Alexandru Elisei @ 2020-09-24 11:07 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, maz, swboyd, catalin.marinas, will,
	Julien Thierry, Julien Thierry, Will Deacon, Peter Zijlstra,
	Ingo Molnar, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim

From: Julien Thierry <julien.thierry@arm.com>

When handling events, armv8pmu_handle_irq() calls perf_event_overflow(),
and subsequently calls irq_work_run() to handle any work queued by
perf_event_overflow(). As perf_event_overflow() raises IPI_IRQ_WORK when
queuing the work, this isn't strictly necessary and the work could be
handled as part of the IPI_IRQ_WORK handler.

In the common case the IPI handler will run immediately after the PMU IRQ
handler, and where the PE is heavily loaded with interrupts, other handlers
may run first, widening the window where some counters are disabled.

In practice this window is unlikely to be a significant issue, and removing
the call to irq_work_run() would make the PMU IRQ handler NMI safe in
addition to making it simpler, so let's do that.
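
The flow this relies on can be sketched as follows (a simplified call chain,
not actual code):

    /*
     *   armv8pmu_handle_irq()
     *     -> perf_event_overflow()            // may queue an irq_work
     *          -> irq_work_queue()
     *               -> arch_irq_work_raise()  // raises IPI_IRQ_WORK on arm64
     *   ... PMU handler returns ...
     *   IPI_IRQ_WORK handler
     *     -> irq_work_run()                   // runs the queued work
     *
     * The explicit irq_work_run() call in the PMU handler is therefore
     * redundant, and removing it is what makes the handler NMI safe.
     */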

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[Alexandru E.: Reworded commit message]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 610500166f90..ef206fb146b9 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -783,20 +783,16 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
+		/*
+		 * Perf event overflow will queue the processing of the event as
+		 * an irq_work which will be taken care of in the handling of
+		 * IPI_IRQ_WORK.
+		 */
 		if (perf_event_overflow(event, &data, regs))
 			cpu_pmu->disable(event);
 	}
 	armv8pmu_start(cpu_pmu);
 
-	/*
-	 * Handle the pending perf events.
-	 *
-	 * Note: this call *must* be run with interrupts disabled. For
-	 * platforms that can have the PMU interrupts raised as an NMI, this
-	 * will not work.
-	 */
-	irq_work_run();
-
 	return IRQ_HANDLED;
 }
 
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe
  2020-09-24 11:06 [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (3 preceding siblings ...)
  2020-09-24 11:07 ` [PATCH v7 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK Alexandru Elisei
@ 2020-09-24 11:07 ` Alexandru Elisei
  2020-09-28 17:57   ` Will Deacon
  2020-09-29  8:11   ` Marc Zyngier
  2020-09-24 11:07 ` [PATCH v7 6/7] arm_pmu: Introduce pmu_irq_ops Alexandru Elisei
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 13+ messages in thread
From: Alexandru Elisei @ 2020-09-24 11:07 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, maz, swboyd, catalin.marinas, will,
	Julien Thierry, Julien Thierry, Marc Zyngier, Will Deacon,
	James Morse, Suzuki K Pouloze, kvm, kvmarm

From: Julien Thierry <julien.thierry@arm.com>

kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
NMI context, defer waking the vcpu to an irq_work queue.

A vcpu can be freed while it's not running by kvm_destroy_vm(). Prevent
running the irq_work for a non-existent vcpu by calling irq_work_sync() on
the PMU destroy path.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
Cc: kvm@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
[Alexandru E.: Added irq_work_sync()]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
I suggested in v6 that I will add an irq_work_sync() to
kvm_pmu_vcpu_reset(). It turns out it's not necessary: a vcpu reset is done
by the vcpu being reset with interrupts enabled, which means all the work
has had a chance to run before the reset takes place.
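
For reference, a minimal sketch of the irq_work pattern used here (everything
except the irq_work API itself is a made-up name for illustration):

    #include <linux/irq_work.h>

    static struct irq_work demo_work;   /* hypothetical work item */

    /* Runs later in IRQ context, outside the NMI, where it is safe to take
     * locks and wake tasks (the equivalent of kvm_vcpu_kick() here). */
    static void demo_work_fn(struct irq_work *work)
    {
    }

    static void demo_init(void)
    {
            init_irq_work(&demo_work, demo_work_fn);
    }

    static void demo_from_nmi(void)
    {
            irq_work_queue(&demo_work);     /* safe to call from NMI context */
    }

    static void demo_teardown(void)
    {
            irq_work_sync(&demo_work);      /* wait for any queued run to finish */
    }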

 arch/arm64/kvm/pmu-emul.c | 26 +++++++++++++++++++++++++-
 include/kvm/arm_pmu.h     |  1 +
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f0d0312c0a55..81916e360b1e 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -269,6 +269,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
 		kvm_pmu_release_perf_event(&pmu->pmc[i]);
+	irq_work_sync(&vcpu->arch.pmu.overflow_work);
 }
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
@@ -433,6 +434,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
 	kvm_pmu_update_state(vcpu);
 }
 
+/**
+ * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
+ * to the event.
+ * This is why we need a callback to do it once outside of the NMI context.
+ */
+static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_pmu *pmu;
+
+	pmu = container_of(work, struct kvm_pmu, overflow_work);
+	vcpu = kvm_pmc_to_vcpu(pmu->pmc);
+
+	kvm_vcpu_kick(vcpu);
+}
+
 /**
  * When the perf event overflows, set the overflow status and inform the vcpu.
  */
@@ -465,7 +482,11 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 
 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
-		kvm_vcpu_kick(vcpu);
+
+		if (!in_nmi())
+			kvm_vcpu_kick(vcpu);
+		else
+			irq_work_queue(&vcpu->arch.pmu.overflow_work);
 	}
 
 	cpu_pmu->pmu.start(perf_event, PERF_EF_RELOAD);
@@ -764,6 +785,9 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 			return ret;
 	}
 
+	init_irq_work(&vcpu->arch.pmu.overflow_work,
+		      kvm_pmu_perf_overflow_notify_vcpu);
+
 	vcpu->arch.pmu.created = true;
 	return 0;
 }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 6db030439e29..dbf4f08d42e5 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -27,6 +27,7 @@ struct kvm_pmu {
 	bool ready;
 	bool created;
 	bool irq_level;
+	struct irq_work overflow_work;
 };
 
 #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v7 6/7] arm_pmu: Introduce pmu_irq_ops
  2020-09-24 11:06 [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (4 preceding siblings ...)
  2020-09-24 11:07 ` [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe Alexandru Elisei
@ 2020-09-24 11:07 ` Alexandru Elisei
  2020-09-24 11:07 ` [PATCH v7 7/7] arm_pmu: arm64: Use NMIs for PMU Alexandru Elisei
  2020-09-28 22:13 ` [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Will Deacon
  7 siblings, 0 replies; 13+ messages in thread
From: Alexandru Elisei @ 2020-09-24 11:07 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, maz, swboyd, catalin.marinas, will,
	Julien Thierry, Julien Thierry, Will Deacon

From: Julien Thierry <julien.thierry@arm.com>

Currently the PMU interrupt can either be a normal irq or a percpu irq.
Supporting NMI will introduce two cases for each existing one. It becomes
a mess of 'if's when managing the interrupt.

Define sets of callbacks for operations commonly done on the interrupt. The
appropriate set of callbacks is selected at interrupt request time and
simplifies interrupt enabling/disabling and freeing.
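
The resulting usage pattern, condensed from the diff below: the ops structure
is chosen once when the interrupt is requested, and every later operation
becomes a single indirect call.

    /* At request time: */
    per_cpu(cpu_irq_ops, cpu) = irq_ops;

    /* At CPU online / offline / free time: */
    per_cpu(cpu_irq_ops, cpu)->enable_pmuirq(irq);
    per_cpu(cpu_irq_ops, cpu)->disable_pmuirq(irq);
    per_cpu(cpu_irq_ops, cpu)->free_pmuirq(irq, cpu, &cpu_armpmu);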

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 drivers/perf/arm_pmu.c | 90 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 74 insertions(+), 16 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index df352b334ea7..a770726e98d4 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -26,8 +26,46 @@
 
 #include <asm/irq_regs.h>
 
+static int armpmu_count_irq_users(const int irq);
+
+struct pmu_irq_ops {
+	void (*enable_pmuirq)(unsigned int irq);
+	void (*disable_pmuirq)(unsigned int irq);
+	void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
+};
+
+static void armpmu_free_pmuirq(unsigned int irq, int cpu, void __percpu *devid)
+{
+	free_irq(irq, per_cpu_ptr(devid, cpu));
+}
+
+static const struct pmu_irq_ops pmuirq_ops = {
+	.enable_pmuirq = enable_irq,
+	.disable_pmuirq = disable_irq_nosync,
+	.free_pmuirq = armpmu_free_pmuirq
+};
+
+static void armpmu_enable_percpu_pmuirq(unsigned int irq)
+{
+	enable_percpu_irq(irq, IRQ_TYPE_NONE);
+}
+
+static void armpmu_free_percpu_pmuirq(unsigned int irq, int cpu,
+				   void __percpu *devid)
+{
+	if (armpmu_count_irq_users(irq) == 1)
+		free_percpu_irq(irq, devid);
+}
+
+static const struct pmu_irq_ops percpu_pmuirq_ops = {
+	.enable_pmuirq = armpmu_enable_percpu_pmuirq,
+	.disable_pmuirq = disable_percpu_irq,
+	.free_pmuirq = armpmu_free_percpu_pmuirq
+};
+
 static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
 static DEFINE_PER_CPU(int, cpu_irq);
+static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);
 
 static inline u64 arm_pmu_event_max_period(struct perf_event *event)
 {
@@ -544,6 +582,23 @@ static int armpmu_count_irq_users(const int irq)
 	return count;
 }
 
+static const struct pmu_irq_ops *armpmu_find_irq_ops(int irq)
+{
+	const struct pmu_irq_ops *ops = NULL;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(cpu_irq, cpu) != irq)
+			continue;
+
+		ops = per_cpu(cpu_irq_ops, cpu);
+		if (ops)
+			break;
+	}
+
+	return ops;
+}
+
 void armpmu_free_irq(int irq, int cpu)
 {
 	if (per_cpu(cpu_irq, cpu) == 0)
@@ -551,18 +606,18 @@ void armpmu_free_irq(int irq, int cpu)
 	if (WARN_ON(irq != per_cpu(cpu_irq, cpu)))
 		return;
 
-	if (!irq_is_percpu_devid(irq))
-		free_irq(irq, per_cpu_ptr(&cpu_armpmu, cpu));
-	else if (armpmu_count_irq_users(irq) == 1)
-		free_percpu_irq(irq, &cpu_armpmu);
+	per_cpu(cpu_irq_ops, cpu)->free_pmuirq(irq, cpu, &cpu_armpmu);
 
 	per_cpu(cpu_irq, cpu) = 0;
+	per_cpu(cpu_irq_ops, cpu) = NULL;
 }
 
 int armpmu_request_irq(int irq, int cpu)
 {
 	int err = 0;
 	const irq_handler_t handler = armpmu_dispatch_irq;
+	const struct pmu_irq_ops *irq_ops;
+
 	if (!irq)
 		return 0;
 
@@ -584,15 +639,26 @@ int armpmu_request_irq(int irq, int cpu)
 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
 		err = request_irq(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&cpu_armpmu, cpu));
+
+		irq_ops = &pmuirq_ops;
 	} else if (armpmu_count_irq_users(irq) == 0) {
 		err = request_percpu_irq(irq, handler, "arm-pmu",
 					 &cpu_armpmu);
+
+		irq_ops = &percpu_pmuirq_ops;
+	} else {
+		/* Per cpudevid irq was already requested by another CPU */
+		irq_ops = armpmu_find_irq_ops(irq);
+
+		if (WARN_ON(!irq_ops))
+			err = -EINVAL;
 	}
 
 	if (err)
 		goto err_out;
 
 	per_cpu(cpu_irq, cpu) = irq;
+	per_cpu(cpu_irq_ops, cpu) = irq_ops;
 	return 0;
 
 err_out:
@@ -625,12 +691,8 @@ static int arm_perf_starting_cpu(unsigned int cpu, struct hlist_node *node)
 	per_cpu(cpu_armpmu, cpu) = pmu;
 
 	irq = armpmu_get_cpu_irq(pmu, cpu);
-	if (irq) {
-		if (irq_is_percpu_devid(irq))
-			enable_percpu_irq(irq, IRQ_TYPE_NONE);
-		else
-			enable_irq(irq);
-	}
+	if (irq)
+		per_cpu(cpu_irq_ops, cpu)->enable_pmuirq(irq);
 
 	return 0;
 }
@@ -644,12 +706,8 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
 		return 0;
 
 	irq = armpmu_get_cpu_irq(pmu, cpu);
-	if (irq) {
-		if (irq_is_percpu_devid(irq))
-			disable_percpu_irq(irq);
-		else
-			disable_irq_nosync(irq);
-	}
+	if (irq)
+		per_cpu(cpu_irq_ops, cpu)->disable_pmuirq(irq);
 
 	per_cpu(cpu_armpmu, cpu) = NULL;
 
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v7 7/7] arm_pmu: arm64: Use NMIs for PMU
  2020-09-24 11:06 [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (5 preceding siblings ...)
  2020-09-24 11:07 ` [PATCH v7 6/7] arm_pmu: Introduce pmu_irq_ops Alexandru Elisei
@ 2020-09-24 11:07 ` Alexandru Elisei
  2020-09-28 22:13 ` [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Will Deacon
  7 siblings, 0 replies; 13+ messages in thread
From: Alexandru Elisei @ 2020-09-24 11:07 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: mark.rutland, sumit.garg, maz, swboyd, catalin.marinas, will,
	Julien Thierry, Julien Thierry, Will Deacon

From: Julien Thierry <julien.thierry@arm.com>

Add required PMU interrupt operations for NMIs. Request interrupt lines as
NMIs when possible, otherwise fall back to normal interrupts.

NMIs are only supported on the arm64 architecture with a GICv3 irqchip.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
[Alexandru E.: Added that NMIs only work on arm64 + GICv3, print message
	when PMU is using NMIs]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 drivers/perf/arm_pmu.c | 71 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 63 insertions(+), 8 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index a770726e98d4..cb2f55f450e4 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -45,6 +45,17 @@ static const struct pmu_irq_ops pmuirq_ops = {
 	.free_pmuirq = armpmu_free_pmuirq
 };
 
+static void armpmu_free_pmunmi(unsigned int irq, int cpu, void __percpu *devid)
+{
+	free_nmi(irq, per_cpu_ptr(devid, cpu));
+}
+
+static const struct pmu_irq_ops pmunmi_ops = {
+	.enable_pmuirq = enable_nmi,
+	.disable_pmuirq = disable_nmi_nosync,
+	.free_pmuirq = armpmu_free_pmunmi
+};
+
 static void armpmu_enable_percpu_pmuirq(unsigned int irq)
 {
 	enable_percpu_irq(irq, IRQ_TYPE_NONE);
@@ -63,10 +74,37 @@ static const struct pmu_irq_ops percpu_pmuirq_ops = {
 	.free_pmuirq = armpmu_free_percpu_pmuirq
 };
 
+static void armpmu_enable_percpu_pmunmi(unsigned int irq)
+{
+	if (!prepare_percpu_nmi(irq))
+		enable_percpu_nmi(irq, IRQ_TYPE_NONE);
+}
+
+static void armpmu_disable_percpu_pmunmi(unsigned int irq)
+{
+	disable_percpu_nmi(irq);
+	teardown_percpu_nmi(irq);
+}
+
+static void armpmu_free_percpu_pmunmi(unsigned int irq, int cpu,
+				      void __percpu *devid)
+{
+	if (armpmu_count_irq_users(irq) == 1)
+		free_percpu_nmi(irq, devid);
+}
+
+static const struct pmu_irq_ops percpu_pmunmi_ops = {
+	.enable_pmuirq = armpmu_enable_percpu_pmunmi,
+	.disable_pmuirq = armpmu_disable_percpu_pmunmi,
+	.free_pmuirq = armpmu_free_percpu_pmunmi
+};
+
 static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
 static DEFINE_PER_CPU(int, cpu_irq);
 static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);
 
+static bool has_nmi;
+
 static inline u64 arm_pmu_event_max_period(struct perf_event *event)
 {
 	if (event->hw.flags & ARMPMU_EVT_64BIT)
@@ -637,15 +675,31 @@ int armpmu_request_irq(int irq, int cpu)
 			    IRQF_NO_THREAD;
 
 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
-		err = request_irq(irq, handler, irq_flags, "arm-pmu",
+
+		err = request_nmi(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&cpu_armpmu, cpu));
 
-		irq_ops = &pmuirq_ops;
+		/* If cannot get an NMI, get a normal interrupt */
+		if (err) {
+			err = request_irq(irq, handler, irq_flags, "arm-pmu",
+					  per_cpu_ptr(&cpu_armpmu, cpu));
+			irq_ops = &pmuirq_ops;
+		} else {
+			has_nmi = true;
+			irq_ops = &pmunmi_ops;
+		}
 	} else if (armpmu_count_irq_users(irq) == 0) {
-		err = request_percpu_irq(irq, handler, "arm-pmu",
-					 &cpu_armpmu);
-
-		irq_ops = &percpu_pmuirq_ops;
+		err = request_percpu_nmi(irq, handler, "arm-pmu", &cpu_armpmu);
+
+		/* If cannot get an NMI, get a normal interrupt */
+		if (err) {
+			err = request_percpu_irq(irq, handler, "arm-pmu",
+						 &cpu_armpmu);
+			irq_ops = &percpu_pmuirq_ops;
+		} else {
+			has_nmi = true;
+			irq_ops = &percpu_pmunmi_ops;
+		}
 	} else {
 		/* Per cpudevid irq was already requested by another CPU */
 		irq_ops = armpmu_find_irq_ops(irq);
@@ -928,8 +982,9 @@ int armpmu_register(struct arm_pmu *pmu)
 	if (!__oprofile_cpu_pmu)
 		__oprofile_cpu_pmu = pmu;
 
-	pr_info("enabled with %s PMU driver, %d counters available\n",
-		pmu->name, pmu->num_events);
+	pr_info("enabled with %s PMU driver, %d counters available%s\n",
+		pmu->name, pmu->num_events,
+		has_nmi ? ", using NMIs" : "");
 
 	return 0;
 
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe
  2020-09-24 11:07 ` [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe Alexandru Elisei
@ 2020-09-28 17:57   ` Will Deacon
  2020-09-30 12:37     ` Alexandru Elisei
  2020-09-29  8:11   ` Marc Zyngier
  1 sibling, 1 reply; 13+ messages in thread
From: Will Deacon @ 2020-09-28 17:57 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: linux-arm-kernel, linux-kernel, mark.rutland, sumit.garg, maz,
	swboyd, catalin.marinas, Julien Thierry, Julien Thierry,
	Marc Zyngier, Will Deacon, James Morse, Suzuki K Pouloze, kvm,
	kvmarm

On Thu, Sep 24, 2020 at 12:07:04PM +0100, Alexandru Elisei wrote:
> From: Julien Thierry <julien.thierry@arm.com>
> 
> kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
> NMI context, defer waking the vcpu to an irq_work queue.
> 
> A vcpu can be freed while it's not running by kvm_destroy_vm(). Prevent
> running the irq_work for a non-existent vcpu by calling irq_work_sync() on
> the PMU destroy path.
> 
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
> Cc: kvm@vger.kernel.org
> Cc: kvmarm@lists.cs.columbia.edu
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
> [Alexandru E.: Added irq_work_sync()]
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> ---
> I suggested in v6 that I will add an irq_work_sync() to
> kvm_pmu_vcpu_reset(). It turns out it's not necessary: a vcpu reset is done
> by the vcpu being reset with interrupts enabled, which means all the work
> has had a chance to run before the reset takes place.

I don't understand this ^^

But the patch itself looks good, so I'm going to queue this lot anyway!

Will

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt
  2020-09-24 11:06 [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
                   ` (6 preceding siblings ...)
  2020-09-24 11:07 ` [PATCH v7 7/7] arm_pmu: arm64: Use NMIs for PMU Alexandru Elisei
@ 2020-09-28 22:13 ` Will Deacon
  7 siblings, 0 replies; 13+ messages in thread
From: Will Deacon @ 2020-09-28 22:13 UTC (permalink / raw)
  To: Alexandru Elisei, linux-kernel, linux-arm-kernel
  Cc: catalin.marinas, kernel-team, Will Deacon, sumit.garg, swboyd,
	maz, mark.rutland

On Thu, 24 Sep 2020 12:06:59 +0100, Alexandru Elisei wrote:
> The series changes the arm_pmu driver to use NMIs for the perf interrupt
> when NMIs are available on the platform (currently, only arm64 + GICv3). To
> make it easier to play with the patches, I've pushed a branch at [1]:
> 
> $ git clone -b pmu-nmi-v7 git://linux-arm.org/linux-ae
> 
> The changes from v6 were minor, but I've still run the same tests on an
> espressobin v7*. These are the results of running perf record -a -- sleep
> 60 (all results show kernel symbols with overhead >= 1%):
> 
> [...]

Applied to will (for-next/perf), thanks!

[1/7] arm64: perf: Add missing ISB in armv8pmu_enable_counter()
      https://git.kernel.org/will/c/490d7b7c0845
[2/7] arm64: perf: Avoid PMXEV* indirection
      https://git.kernel.org/will/c/0fdf1bb75953
[3/7] arm64: perf: Remove PMU locking
      https://git.kernel.org/will/c/2a0e2a02e4b7
[4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK
      https://git.kernel.org/will/c/05ab72813340
[5/7] KVM: arm64: pmu: Make overflow handler NMI safe
      https://git.kernel.org/will/c/95e92e45a454
[6/7] arm_pmu: Introduce pmu_irq_ops
      https://git.kernel.org/will/c/f76b130bdb89
[7/7] arm_pmu: arm64: Use NMIs for PMU
      https://git.kernel.org/will/c/d8f6267f7ce5

Cheers,
-- 
Will

https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe
  2020-09-24 11:07 ` [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe Alexandru Elisei
  2020-09-28 17:57   ` Will Deacon
@ 2020-09-29  8:11   ` Marc Zyngier
  2020-09-30 11:46     ` Alexandru Elisei
  1 sibling, 1 reply; 13+ messages in thread
From: Marc Zyngier @ 2020-09-29  8:11 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: linux-arm-kernel, linux-kernel, mark.rutland, sumit.garg, swboyd,
	catalin.marinas, will, Julien Thierry, Julien Thierry,
	Marc Zyngier, Will Deacon, James Morse, Suzuki K Pouloze, kvm,
	kvmarm

On 2020-09-24 12:07, Alexandru Elisei wrote:
> From: Julien Thierry <julien.thierry@arm.com>
> 
> kvm_vcpu_kick() is not NMI safe. When the overflow handler is called 
> from
> NMI context, defer waking the vcpu to an irq_work queue.
> 
> A vcpu can be freed while it's not running by kvm_destroy_vm(). Prevent
> running the irq_work for a non-existent vcpu by calling irq_work_sync() 
> on
> the PMU destroy path.
> 
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
> Cc: kvm@vger.kernel.org
> Cc: kvmarm@lists.cs.columbia.edu
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
> [Alexandru E.: Added irq_work_sync()]
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> ---
> I suggested in v6 that I will add an irq_work_sync() to
> kvm_pmu_vcpu_reset(). It turns out it's not necessary: a vcpu reset is 
> done
> by the vcpu being reset with interrupts enabled, which means all the 
> work
> has had a chance to run before the reset takes place.

I don't understand your argument about interrupts being enabled. The real
reason for not needing any synchronization is that all that the queued work
does is to kick the vcpu. Given that the vcpu is resetting, no amount of
kicking is going to change anything (it is already outside of the guest).

Things are obviously different on destroy, where the vcpu is actively going
away and we need to make sure we don't use stale data.

> 
>  arch/arm64/kvm/pmu-emul.c | 26 +++++++++++++++++++++++++-
>  include/kvm/arm_pmu.h     |  1 +
>  2 files changed, 26 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index f0d0312c0a55..81916e360b1e 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -269,6 +269,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
> 
>  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
>  		kvm_pmu_release_perf_event(&pmu->pmc[i]);
> +	irq_work_sync(&vcpu->arch.pmu.overflow_work);
>  }
> 
>  u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> @@ -433,6 +434,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
>  	kvm_pmu_update_state(vcpu);
>  }
> 
> +/**
> + * When perf interrupt is an NMI, we cannot safely notify the vcpu
> corresponding
> + * to the event.
> + * This is why we need a callback to do it once outside of the NMI 
> context.
> + */
> +static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
> +{
> +	struct kvm_vcpu *vcpu;
> +	struct kvm_pmu *pmu;
> +
> +	pmu = container_of(work, struct kvm_pmu, overflow_work);
> +	vcpu = kvm_pmc_to_vcpu(pmu->pmc);
> +
> +	kvm_vcpu_kick(vcpu);
> +}
> +
>  /**
>   * When the perf event overflows, set the overflow status and inform 
> the vcpu.
>   */
> @@ -465,7 +482,11 @@ static void kvm_pmu_perf_overflow(struct
> perf_event *perf_event,
> 
>  	if (kvm_pmu_overflow_status(vcpu)) {
>  		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
> -		kvm_vcpu_kick(vcpu);
> +
> +		if (!in_nmi())
> +			kvm_vcpu_kick(vcpu);
> +		else
> +			irq_work_queue(&vcpu->arch.pmu.overflow_work);
>  	}
> 
>  	cpu_pmu->pmu.start(perf_event, PERF_EF_RELOAD);
> @@ -764,6 +785,9 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu 
> *vcpu)
>  			return ret;
>  	}
> 
> +	init_irq_work(&vcpu->arch.pmu.overflow_work,
> +		      kvm_pmu_perf_overflow_notify_vcpu);
> +
>  	vcpu->arch.pmu.created = true;
>  	return 0;
>  }
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 6db030439e29..dbf4f08d42e5 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -27,6 +27,7 @@ struct kvm_pmu {
>  	bool ready;
>  	bool created;
>  	bool irq_level;
> +	struct irq_work overflow_work;

Nit: placing this new field right after the pmc array would avoid creating
unnecessary padding in the structure. Not a big deal, and definitely
something we can sort out when applying the patch.

>  };
> 
>  #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)

Reviewed-by: Marc Zyngier <maz@kernel.org>

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe
  2020-09-29  8:11   ` Marc Zyngier
@ 2020-09-30 11:46     ` Alexandru Elisei
  0 siblings, 0 replies; 13+ messages in thread
From: Alexandru Elisei @ 2020-09-30 11:46 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, linux-kernel, mark.rutland, sumit.garg, swboyd,
	catalin.marinas, will, Julien Thierry, Julien Thierry,
	Marc Zyngier, Will Deacon, James Morse, Suzuki K Pouloze, kvm,
	kvmarm

Hello,

On 9/29/20 9:11 AM, Marc Zyngier wrote:
> On 2020-09-24 12:07, Alexandru Elisei wrote:
>> From: Julien Thierry <julien.thierry@arm.com>
>>
>> kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
>> NMI context, defer waking the vcpu to an irq_work queue.
>>
>> A vcpu can be freed while it's not running by kvm_destroy_vm(). Prevent
>> running the irq_work for a non-existent vcpu by calling irq_work_sync() on
>> the PMU destroy path.
>>
>> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: James Morse <james.morse@arm.com>
>> Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
>> Cc: kvm@vger.kernel.org
>> Cc: kvmarm@lists.cs.columbia.edu
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
>> [Alexandru E.: Added irq_work_sync()]
>> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
>> ---
>> I suggested in v6 that I will add an irq_work_sync() to
>> kvm_pmu_vcpu_reset(). It turns out it's not necessary: a vcpu reset is done
>> by the vcpu being reset with interrupts enabled, which means all the work
>> has had a chance to run before the reset takes place.
>
> I don't understand your argument about interrupts being enabled. The real
> reason for not needing any synchronization is that all that the queued work
> does is to kick the vcpu. Given that the vcpu is resetting, no amount of
> kicking is going to change anything (it is already outside of the guest).
>
> Things are obviously different on destroy, where the vcpu is actively going
> away and we need to make sure we don't use stale data.

Like you and Will noticed, the above really doesn't make much sense. The reason we
don't need to wait for the irq_work to be finished on reset is indeed that the
vcpu isn't freed, so we will never trigger a use-after-free bug.

>
>>
>>  arch/arm64/kvm/pmu-emul.c | 26 +++++++++++++++++++++++++-
>>  include/kvm/arm_pmu.h     |  1 +
>>  2 files changed, 26 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
>> index f0d0312c0a55..81916e360b1e 100644
>> --- a/arch/arm64/kvm/pmu-emul.c
>> +++ b/arch/arm64/kvm/pmu-emul.c
>> @@ -269,6 +269,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
>>
>>      for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
>>          kvm_pmu_release_perf_event(&pmu->pmc[i]);
>> +    irq_work_sync(&vcpu->arch.pmu.overflow_work);
>>  }
>>
>>  u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>> @@ -433,6 +434,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
>>      kvm_pmu_update_state(vcpu);
>>  }
>>
>> +/**
>> + * When perf interrupt is an NMI, we cannot safely notify the vcpu
>> corresponding
>> + * to the event.
>> + * This is why we need a callback to do it once outside of the NMI context.
>> + */
>> +static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
>> +{
>> +    struct kvm_vcpu *vcpu;
>> +    struct kvm_pmu *pmu;
>> +
>> +    pmu = container_of(work, struct kvm_pmu, overflow_work);
>> +    vcpu = kvm_pmc_to_vcpu(pmu->pmc);
>> +
>> +    kvm_vcpu_kick(vcpu);
>> +}
>> +
>>  /**
>>   * When the perf event overflows, set the overflow status and inform the vcpu.
>>   */
>> @@ -465,7 +482,11 @@ static void kvm_pmu_perf_overflow(struct
>> perf_event *perf_event,
>>
>>      if (kvm_pmu_overflow_status(vcpu)) {
>>          kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
>> -        kvm_vcpu_kick(vcpu);
>> +
>> +        if (!in_nmi())
>> +            kvm_vcpu_kick(vcpu);
>> +        else
>> +            irq_work_queue(&vcpu->arch.pmu.overflow_work);
>>      }
>>
>>      cpu_pmu->pmu.start(perf_event, PERF_EF_RELOAD);
>> @@ -764,6 +785,9 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
>>              return ret;
>>      }
>>
>> +    init_irq_work(&vcpu->arch.pmu.overflow_work,
>> +              kvm_pmu_perf_overflow_notify_vcpu);
>> +
>>      vcpu->arch.pmu.created = true;
>>      return 0;
>>  }
>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>> index 6db030439e29..dbf4f08d42e5 100644
>> --- a/include/kvm/arm_pmu.h
>> +++ b/include/kvm/arm_pmu.h
>> @@ -27,6 +27,7 @@ struct kvm_pmu {
>>      bool ready;
>>      bool created;
>>      bool irq_level;
>> +    struct irq_work overflow_work;
>
> Nit: placing this new field right after the pmc array would avoid creating
> an unnecessary padding in the structure. Not a big deal, and definitely
> something we can sort out when applying the patch.

That makes sense: overflow_work must be aligned to 8 bytes, and there are 16
elements in the pmc array, which means no padding is required for the
overflow_work field.

Thanks,
Alex

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe
  2020-09-28 17:57   ` Will Deacon
@ 2020-09-30 12:37     ` Alexandru Elisei
  0 siblings, 0 replies; 13+ messages in thread
From: Alexandru Elisei @ 2020-09-30 12:37 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-kernel, mark.rutland, sumit.garg, maz,
	swboyd, catalin.marinas, Julien Thierry, Julien Thierry,
	Marc Zyngier, Will Deacon, James Morse, Suzuki K Pouloze, kvm,
	kvmarm

Hi Will,

On 9/28/20 6:57 PM, Will Deacon wrote:

> On Thu, Sep 24, 2020 at 12:07:04PM +0100, Alexandru Elisei wrote:
>> From: Julien Thierry <julien.thierry@arm.com>
>>
>> kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
>> NMI context, defer waking the vcpu to an irq_work queue.
>>
>> A vcpu can be freed while it's not running by kvm_destroy_vm(). Prevent
>> running the irq_work for a non-existent vcpu by calling irq_work_sync() on
>> the PMU destroy path.
>>
>> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: James Morse <james.morse@arm.com>
>> Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
>> Cc: kvm@vger.kernel.org
>> Cc: kvmarm@lists.cs.columbia.edu
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
>> [Alexandru E.: Added irq_work_sync()]
>> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
>> ---
>> I suggested in v6 that I will add an irq_work_sync() to
>> kvm_pmu_vcpu_reset(). It turns out it's not necessary: a vcpu reset is done
>> by the vcpu being reset with interrupts enabled, which means all the work
>> has had a chance to run before the reset takes place.
> I don't understand this ^^

Marc had the same comment, I replied in his email. I thought about it and you're
right, it doesn't make much sense.

>
> But the patch itself looks good, so I'm going to queue this lot anyway!

Thank you for picking up the series!

Thanks,
Alex

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2020-09-30 12:36 UTC | newest]

Thread overview: 13+ messages
2020-09-24 11:06 [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
2020-09-24 11:07 ` [PATCH v7 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_counter() Alexandru Elisei
2020-09-24 11:07 ` [PATCH v7 2/7] arm64: perf: Avoid PMXEV* indirection Alexandru Elisei
2020-09-24 11:07 ` [PATCH v7 3/7] arm64: perf: Remove PMU locking Alexandru Elisei
2020-09-24 11:07 ` [PATCH v7 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK Alexandru Elisei
2020-09-24 11:07 ` [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe Alexandru Elisei
2020-09-28 17:57   ` Will Deacon
2020-09-30 12:37     ` Alexandru Elisei
2020-09-29  8:11   ` Marc Zyngier
2020-09-30 11:46     ` Alexandru Elisei
2020-09-24 11:07 ` [PATCH v7 6/7] arm_pmu: Introduce pmu_irq_ops Alexandru Elisei
2020-09-24 11:07 ` [PATCH v7 7/7] arm_pmu: arm64: Use NMIs for PMU Alexandru Elisei
2020-09-28 22:13 ` [PATCH v7 0/7] arm_pmu: Use NMI for perf interrupt Will Deacon
