* [PATCH v5 0/7] arm_pmu: Use NMI for perf interrupt
@ 2020-06-17 11:38 ` Alexandru Elisei
  0 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-17 11:38 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel; +Cc: maz, will, catalin.marinas, mark.rutland

The series makes the arm_pmu driver use NMIs for the perf interrupt when
NMIs are available on the platform (currently, only arm64 + GICv3). To make
it easier to play with the patches, I've pushed a branch at [1]:

$ git clone -b pmu-nmi-v5 git://linux-arm.org/linux-ae
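
To actually exercise the NMI path, the kernel also needs the existing
upstream pseudo-NMI support (not something added by this series) built in
and enabled on the command line; roughly:

$ grep ARM64_PSEUDO_NMI .config
CONFIG_ARM64_PSEUDO_NMI=y
$ cat /proc/cmdline
... irqchip.gicv3_pseudo_nmi=1 ...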

I've tested the series on an espressobin v7*. These are the results of
running perf record -a -- sleep 60:

1. Without the patches:

    12.68%  [k] _raw_spin_unlock_irq
    11.08%  [k] arch_cpu_idle
     7.86%  [k] _raw_spin_unlock_irqrestore
     6.05%  [k] arch_counter_get_cntpct
     2.86%  [k] __softirqentry_text_start
     2.33%  [k] tick_nohz_idle_exit
     [..]

2. Using NMIs:

    19.45%  [k] arch_counter_get_cntpct
     5.14%  [k] __delay
     3.32%  [k] wait_for_xmitr
     1.99%  [k] ktime_get
     1.00%  [k] _raw_write_lock_irqsave
     1.00%  [.] avahi_escape_label
     [..]

When running perf record -a -- iperf3 -c 127.0.0.1 -t 60:

1. Without the patches:

    24.70%  [k] __arch_copy_from_user
    21.77%  [k] __arch_copy_to_user
     5.21%  [k] _raw_spin_unlock_irq
     2.86%  [k] _raw_spin_unlock_irqrestore
     1.99%  [k] __free_pages_ok
     1.61%  [k] arch_cpu_idle
     [..]

2. Using NMIs:

    23.84%  [k] __arch_copy_from_user
    23.44%  [k] __arch_copy_to_user
     1.23%  [k] get_page_from_freelist
     1.16%  [k] tcp_ack
     0.80%  [k] __free_pages_ok
     0.78%  [k] tcp_sendmsg_locked
     [..]
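
The listings above are the top entries of the per-symbol profile; the exact
report invocation isn't spelled out here, but something like the following
should reproduce that kind of output from the recorded perf.data:

$ perf report --stdio --sort symbol | head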

I've run the same tests in a VM when both host and guest use NMIs, and when
neither uses them. All of these tests were also run on the model. The
results were similar in all cases.

* All the firmware versions for espressobin v7 that I've tried clear
SCR_EL3.FIQ, which means that NMIs don't work. To make them work on the
board, I modified the GICv3 driver. That's why I would really appreciate
someone testing this series on a board where NMIs work without any GIC
changes.

Summary of the patches:
* Patch 1 is a fix for a bug that Julien found during the review for v4.
* Patches 2 and 3 remove locking from arm64 perf event code.
* Patches 4 and 5 make the arm64 PMU interrupt handler NMI safe.
* Patches 6 and 7 enable the use of NMIs on arm64 with a GICv3 irqchip.

Changes since v4 [2]:
- Rebased on top of v5.8-rc1 and dropped the Tested-by tags because it has
  been almost a year since the series was last tested.
- Dropped patch 3 because I couldn't find any instance where
  armv7pmu_read_counter() was called with interrupts enabled. I've also
  tested this by running several instances of perf for a few hours, and the
  function was called every time with interrupts disabled.
- Dropped patches 4 and 5 because, in my opinion, the tradeoff wasn't worth
  it: the irq handler was slower all the time (because it saved/restored
  the counter select register), in exchange for being only slightly faster
  on the rare occasions when it triggered at the beginning of the critical
  sections.
- Minor changes here and there to address review comments.

Changes since v3 [3]:
- Added tags
- Fix build issue for perf_event_v6
- Don't disable preemption in pmu->enable()
- Always rely on IPI_IRQ_WORK to run the queued work
- Fixed typos + cleanups

Changes since v2 [4]:
- Rebased on recent linux-next (next-20190708)
- Fixed a number of bugs with indices (reported by Wei)
- Minor style fixes

Changes since v1 [5]:
- Rebased on v5.1-rc1
- Pseudo-NMI has changed a lot since then, use the (now merged) NMI API
- Remove locking from armv7 perf_event
- Use locking only in armv6 perf_event
- Use direct counter/type registers instead of the selector register for armv8

[1] http://www.linux-arm.org/git?p=linux-ae.git;a=shortlog;h=refs/heads/pmu-nmi-v5
[2] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/666824.html
[3] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/665339.html
[4] https://lists.infradead.org/pipermail/linux-arm-kernel/2019-March/640536.html
[5] https://lists.infradead.org/pipermail/linux-arm-kernel/2018-January/554611.html

Alexandru Elisei (1):
  arm64: perf: Add missing ISB in armv8pmu_enable_event()

Julien Thierry (5):
  arm64: perf: Remove PMU locking
  arm64: perf: Defer irq_work to IPI_IRQ_WORK
  arm64: kvm: pmu: Make overflow handler NMI safe
  arm_pmu: Introduce pmu_irq_ops
  arm_pmu: arm64: Use NMIs for PMU

Mark Rutland (1):
  arm64: perf: Avoid PMXEV* indirection

 arch/arm64/kernel/perf_event.c | 138 ++++++++++++++++++++------------
 arch/arm64/kvm/pmu-emul.c      |  25 +++++-
 drivers/perf/arm_pmu.c         | 142 ++++++++++++++++++++++++++++-----
 include/kvm/arm_pmu.h          |   1 +
 4 files changed, 235 insertions(+), 71 deletions(-)

-- 
2.27.0


* [PATCH v5 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_event()
  2020-06-17 11:38 ` Alexandru Elisei
@ 2020-06-17 11:38   ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-17 11:38 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: maz, will, catalin.marinas, mark.rutland, Julien Thierry,
	Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa,
	Namhyung Kim, Julien Thierry

Writes to the PMXEVTYPER_EL0 register are not self-synchronising. In
armv8pmu_enable_event(), the PE can reorder configuring the event type
after we have enabled the counter and the interrupt. This can lead to an
interrupt being asserted because of the previous event type that we
were counting, not the one that we've just enabled.

The same rationale applies to writes to the PMINTENSET_EL1 register. The PE
can reorder enabling the interrupt at any point in the future after we have
enabled the event.

Prevent both situations from happening by adding an ISB just before we
enable the event counter.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 030896885ade ("arm64: Performance counters support")
Reported-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 4d7879484cec..ee180b2a5b39 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -605,6 +605,7 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	 * Enable interrupt for this counter
 	 */
 	armv8pmu_enable_event_irq(event);
+	isb();
 
 	/*
 	 * Enable counter
-- 
2.27.0


* [PATCH v5 2/7] arm64: perf: Avoid PMXEV* indirection
  2020-06-17 11:38 ` Alexandru Elisei
@ 2020-06-17 11:38   ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-17 11:38 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: maz, will, catalin.marinas, mark.rutland, Julien Thierry,
	Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa,
	Namhyung Kim, Julien Thierry

From: Mark Rutland <mark.rutland@arm.com>

Currently we access the counter registers and their respective type
registers indirectly. This requires us to write to PMSELR, issue an ISB,
then access the relevant PMXEV* registers.

This is unfortunate, because:

* Under virtualization, accessing one register requires two traps to
  the hypervisor, even though we could access the register directly with
  a single trap.

* We have to issue an ISB whose cost we could otherwise avoid.

* When we use NMIs, the NMI handler will have to save/restore the select
  register in case the code it preempted was attempting to access a
  counter or its type register.

We can avoid these issues by directly accessing the relevant registers.
This patch adds helpers to do so.

In armv8pmu_enable_event() we still need the ISB to prevent the PE from
reordering the write to the PMINTENSET_EL1 register. If the interrupt is
enabled before we disable the counter and the new event is configured,
we might get an interrupt triggered by the previously programmed event
overflowing, but which we wrongly attribute to the event that we are
enabling.

In the process, remove the comment that refers to the ARMv7 PMU.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[Julien T.: Don't inline read/write functions to avoid big code-size
	increase, remove unused read_pmevtypern function,
	fix counter index issue.]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[Removed comment, removed trailing semicolons in macros, added ISB]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 95 +++++++++++++++++++++++++++++-----
 1 file changed, 81 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index ee180b2a5b39..e95b5ca70a53 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -323,6 +323,73 @@ static inline bool armv8pmu_event_is_chained(struct perf_event *event)
 #define	ARMV8_IDX_TO_COUNTER(x)	\
 	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)
 
+/*
+ * This code is really good
+ */
+
+#define PMEVN_CASE(n, case_macro) \
+	case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro)				\
+	do {							\
+		switch (x) {					\
+		PMEVN_CASE(0,  case_macro);			\
+		PMEVN_CASE(1,  case_macro);			\
+		PMEVN_CASE(2,  case_macro);			\
+		PMEVN_CASE(3,  case_macro);			\
+		PMEVN_CASE(4,  case_macro);			\
+		PMEVN_CASE(5,  case_macro);			\
+		PMEVN_CASE(6,  case_macro);			\
+		PMEVN_CASE(7,  case_macro);			\
+		PMEVN_CASE(8,  case_macro);			\
+		PMEVN_CASE(9,  case_macro);			\
+		PMEVN_CASE(10, case_macro);			\
+		PMEVN_CASE(11, case_macro);			\
+		PMEVN_CASE(12, case_macro);			\
+		PMEVN_CASE(13, case_macro);			\
+		PMEVN_CASE(14, case_macro);			\
+		PMEVN_CASE(15, case_macro);			\
+		PMEVN_CASE(16, case_macro);			\
+		PMEVN_CASE(17, case_macro);			\
+		PMEVN_CASE(18, case_macro);			\
+		PMEVN_CASE(19, case_macro);			\
+		PMEVN_CASE(20, case_macro);			\
+		PMEVN_CASE(21, case_macro);			\
+		PMEVN_CASE(22, case_macro);			\
+		PMEVN_CASE(23, case_macro);			\
+		PMEVN_CASE(24, case_macro);			\
+		PMEVN_CASE(25, case_macro);			\
+		PMEVN_CASE(26, case_macro);			\
+		PMEVN_CASE(27, case_macro);			\
+		PMEVN_CASE(28, case_macro);			\
+		PMEVN_CASE(29, case_macro);			\
+		PMEVN_CASE(30, case_macro);			\
+		default: WARN(1, "Invalid PMEV* index");	\
+		}						\
+	} while (0)
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+}
+
 static inline u32 armv8pmu_pmcr_read(void)
 {
 	return read_sysreg(pmcr_el0);
@@ -351,17 +418,11 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
 	return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
 }
 
-static inline void armv8pmu_select_counter(int idx)
+static inline u32 armv8pmu_read_evcntr(int idx)
 {
 	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
-	write_sysreg(counter, pmselr_el0);
-	isb();
-}
 
-static inline u64 armv8pmu_read_evcntr(int idx)
-{
-	armv8pmu_select_counter(idx);
-	return read_sysreg(pmxevcntr_el0);
+	return read_pmevcntrn(counter);
 }
 
 static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
@@ -433,8 +494,9 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
 
 static inline void armv8pmu_write_evcntr(int idx, u64 value)
 {
-	armv8pmu_select_counter(idx);
-	write_sysreg(value, pmxevcntr_el0);
+	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
+
+	write_pmevcntrn(counter, value);
 }
 
 static inline void armv8pmu_write_hw_counter(struct perf_event *event,
@@ -469,9 +531,10 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)
 
 static inline void armv8pmu_write_evtype(int idx, u32 val)
 {
-	armv8pmu_select_counter(idx);
+	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
+
 	val &= ARMV8_PMU_EVTYPE_MASK;
-	write_sysreg(val, pmxevtyper_el0);
+	write_pmevtypern(counter, val);
 }
 
 static inline void armv8pmu_write_event_type(struct perf_event *event)
@@ -491,7 +554,10 @@ static inline void armv8pmu_write_event_type(struct perf_event *event)
 		armv8pmu_write_evtype(idx - 1, hwc->config_base);
 		armv8pmu_write_evtype(idx, chain_evt);
 	} else {
-		armv8pmu_write_evtype(idx, hwc->config_base);
+		if (idx == ARMV8_IDX_CYCLE_COUNTER)
+			write_sysreg(hwc->config_base, pmccfiltr_el0);
+		else
+			armv8pmu_write_evtype(idx, hwc->config_base);
 	}
 }
 
@@ -595,9 +661,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	 * Disable counter
 	 */
 	armv8pmu_disable_event_counter(event);
+	isb();
 
 	/*
-	 * Set event (if destined for PMNx counters).
+	 * Set event.
 	 */
 	armv8pmu_write_event_type(event);
 
-- 
2.27.0


* [PATCH v5 3/7] arm64: perf: Remove PMU locking
  2020-06-17 11:38 ` Alexandru Elisei
@ 2020-06-17 11:38   ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-17 11:38 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: maz, will, catalin.marinas, mark.rutland, Julien Thierry,
	Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa,
	Namhyung Kim

From: Julien Thierry <julien.thierry@arm.com>

The PMU is disabled and enabled, and the counters are programmed from
contexts where interrupts or preemption is disabled.

The functions to toggle the PMU and to program the PMU counters access the
registers directly and don't access data modified by the interrupt handler.
That, and the fact that they're always called from non-preemptible
contexts, means that we don't need to disable interrupts or use a spinlock.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[Explained why locking is not needed, removed WARN_ONs]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 28 ----------------------------
 1 file changed, 28 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index e95b5ca70a53..a6195022be7d 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -647,15 +647,10 @@ static inline u32 armv8pmu_getreset_flags(void)
 
 static void armv8pmu_enable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
 	/*
 	 * Enable counter and interrupt, and set the counter to count
 	 * the event that we're interested in.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 
 	/*
 	 * Disable counter
@@ -678,21 +673,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	 * Enable counter
 	 */
 	armv8pmu_enable_event_counter(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_disable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	/*
-	 * Disable counter and interrupt
-	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-
 	/*
 	 * Disable counter
 	 */
@@ -702,30 +686,18 @@ static void armv8pmu_disable_event(struct perf_event *event)
 	 * Disable interrupt for this counter
 	 */
 	armv8pmu_disable_event_irq(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	/* Enable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	/* Disable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
-- 
2.27.0


* [PATCH v5 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK
  2020-06-17 11:38 ` Alexandru Elisei
@ 2020-06-17 11:38   ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-17 11:38 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: maz, will, catalin.marinas, mark.rutland, Julien Thierry,
	Julien Thierry, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa,
	Namhyung Kim

From: Julien Thierry <julien.thierry@arm.com>

perf_event_overflow() can queue an irq_work on the current PE, which is
executed via an IPI. Move the processing of the irq_work from the PMU IRQ
handler to the IPI handler, which gets executed immediately afterwards.

This also makes the IRQ handler NMI safe, because it removes the call to
irq_work_run().

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[Reworded commit]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kernel/perf_event.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index a6195022be7d..cf1d92030790 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -750,20 +750,16 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
+		/*
+		 * Perf event overflow will queue the processing of the event as
+		 * an irq_work which will be taken care of in the handling of
+		 * IPI_IRQ_WORK.
+		 */
 		if (perf_event_overflow(event, &data, regs))
 			cpu_pmu->disable(event);
 	}
 	armv8pmu_start(cpu_pmu);
 
-	/*
-	 * Handle the pending perf events.
-	 *
-	 * Note: this call *must* be run with interrupts disabled. For
-	 * platforms that can have the PMU interrupts raised as an NMI, this
-	 * will not work.
-	 */
-	irq_work_run();
-
 	return IRQ_HANDLED;
 }
 
-- 
2.27.0


* [PATCH v5 5/7] arm64: kvm: pmu: Make overflow handler NMI safe
  2020-06-17 11:38 ` Alexandru Elisei
@ 2020-06-17 11:38   ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-17 11:38 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: maz, will, catalin.marinas, mark.rutland, Julien Thierry,
	Julien Thierry, Marc Zyngier, James Morse, Suzuki K Pouloze,
	Will Deacon, kvmarm, kvm

From: Julien Thierry <julien.thierry@arm.com>

kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
NMI context, defer waking the vcpu to an irq_work queue.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kvm/pmu-emul.c | 25 ++++++++++++++++++++++++-
 include/kvm/arm_pmu.h     |  1 +
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f0d0312c0a55..30268397ed06 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -433,6 +433,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
 	kvm_pmu_update_state(vcpu);
 }
 
+/**
+ * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
+ * to the event.
+ * This is why we need a callback to do it once outside of the NMI context.
+ */
+static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_pmu *pmu;
+
+	pmu = container_of(work, struct kvm_pmu, overflow_work);
+	vcpu = kvm_pmc_to_vcpu(&pmu->pmc[0]);
+
+	kvm_vcpu_kick(vcpu);
+}
+
 /**
  * When the perf event overflows, set the overflow status and inform the vcpu.
  */
@@ -465,7 +481,11 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 
 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
-		kvm_vcpu_kick(vcpu);
+
+		if (!in_nmi())
+			kvm_vcpu_kick(vcpu);
+		else
+			irq_work_queue(&vcpu->arch.pmu.overflow_work);
 	}
 
 	cpu_pmu->pmu.start(perf_event, PERF_EF_RELOAD);
@@ -764,6 +784,9 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 			return ret;
 	}
 
+	init_irq_work(&vcpu->arch.pmu.overflow_work,
+		      kvm_pmu_perf_overflow_notify_vcpu);
+
 	vcpu->arch.pmu.created = true;
 	return 0;
 }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 6db030439e29..dbf4f08d42e5 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -27,6 +27,7 @@ struct kvm_pmu {
 	bool ready;
 	bool created;
 	bool irq_level;
+	struct irq_work overflow_work;
 };
 
 #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
-- 
2.27.0


* [PATCH v5 6/7] arm_pmu: Introduce pmu_irq_ops
  2020-06-17 11:38 ` Alexandru Elisei
@ 2020-06-17 11:38   ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-17 11:38 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: maz, will, catalin.marinas, mark.rutland, Julien Thierry,
	Julien Thierry, Will Deacon

From: Julien Thierry <julien.thierry@arm.com>

Currently the PMU interrupt can either be a normal irq or a percpu irq.
Supporting NMIs will introduce two cases for each existing one, turning
interrupt management into a mess of 'if's.

Define sets of callbacks for operations commonly done on the interrupt. The
appropriate set of callbacks is selected at interrupt request time and
simplifies interrupt enabling/disabling and freeing.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 drivers/perf/arm_pmu.c | 86 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 70 insertions(+), 16 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index df352b334ea7..17e5952d21e4 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -26,8 +26,46 @@
 
 #include <asm/irq_regs.h>
 
+static int armpmu_count_irq_users(const int irq);
+
+struct pmu_irq_ops {
+	void (*enable_pmuirq)(unsigned int irq);
+	void (*disable_pmuirq)(unsigned int irq);
+	void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
+};
+
+static void armpmu_free_pmuirq(unsigned int irq, int cpu, void __percpu *devid)
+{
+	free_irq(irq, per_cpu_ptr(devid, cpu));
+}
+
+static const struct pmu_irq_ops pmuirq_ops = {
+	.enable_pmuirq = enable_irq,
+	.disable_pmuirq = disable_irq_nosync,
+	.free_pmuirq = armpmu_free_pmuirq
+};
+
+static void armpmu_enable_percpu_pmuirq(unsigned int irq)
+{
+	enable_percpu_irq(irq, IRQ_TYPE_NONE);
+}
+
+static void armpmu_free_percpu_pmuirq(unsigned int irq, int cpu,
+				   void __percpu *devid)
+{
+	if (armpmu_count_irq_users(irq) == 1)
+		free_percpu_irq(irq, devid);
+}
+
+static const struct pmu_irq_ops percpu_pmuirq_ops = {
+	.enable_pmuirq = armpmu_enable_percpu_pmuirq,
+	.disable_pmuirq = disable_percpu_irq,
+	.free_pmuirq = armpmu_free_percpu_pmuirq
+};
+
 static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
 static DEFINE_PER_CPU(int, cpu_irq);
+static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);
 
 static inline u64 arm_pmu_event_max_period(struct perf_event *event)
 {
@@ -544,6 +582,19 @@ static int armpmu_count_irq_users(const int irq)
 	return count;
 }
 
+static const struct pmu_irq_ops *armpmu_find_irq_ops(int irq)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(cpu_irq, cpu) == irq
+		    && per_cpu(cpu_irq_ops, cpu))
+			return per_cpu(cpu_irq_ops, cpu);
+	}
+
+	return NULL;
+}
+
 void armpmu_free_irq(int irq, int cpu)
 {
 	if (per_cpu(cpu_irq, cpu) == 0)
@@ -551,18 +602,18 @@ void armpmu_free_irq(int irq, int cpu)
 	if (WARN_ON(irq != per_cpu(cpu_irq, cpu)))
 		return;
 
-	if (!irq_is_percpu_devid(irq))
-		free_irq(irq, per_cpu_ptr(&cpu_armpmu, cpu));
-	else if (armpmu_count_irq_users(irq) == 1)
-		free_percpu_irq(irq, &cpu_armpmu);
+	per_cpu(cpu_irq_ops, cpu)->free_pmuirq(irq, cpu, &cpu_armpmu);
 
 	per_cpu(cpu_irq, cpu) = 0;
+	per_cpu(cpu_irq_ops, cpu) = NULL;
 }
 
 int armpmu_request_irq(int irq, int cpu)
 {
 	int err = 0;
 	const irq_handler_t handler = armpmu_dispatch_irq;
+	const struct pmu_irq_ops *irq_ops;
+
 	if (!irq)
 		return 0;
 
@@ -584,15 +635,26 @@ int armpmu_request_irq(int irq, int cpu)
 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
 		err = request_irq(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&cpu_armpmu, cpu));
+
+		irq_ops = &pmuirq_ops;
 	} else if (armpmu_count_irq_users(irq) == 0) {
 		err = request_percpu_irq(irq, handler, "arm-pmu",
 					 &cpu_armpmu);
+
+		irq_ops = &percpu_pmuirq_ops;
+	} else {
+		/* Per cpudevid irq was already requested by another CPU */
+		irq_ops = armpmu_find_irq_ops(irq);
+
+		if (WARN_ON(!irq_ops))
+			err = -EINVAL;
 	}
 
 	if (err)
 		goto err_out;
 
 	per_cpu(cpu_irq, cpu) = irq;
+	per_cpu(cpu_irq_ops, cpu) = irq_ops;
 	return 0;
 
 err_out:
@@ -625,12 +687,8 @@ static int arm_perf_starting_cpu(unsigned int cpu, struct hlist_node *node)
 	per_cpu(cpu_armpmu, cpu) = pmu;
 
 	irq = armpmu_get_cpu_irq(pmu, cpu);
-	if (irq) {
-		if (irq_is_percpu_devid(irq))
-			enable_percpu_irq(irq, IRQ_TYPE_NONE);
-		else
-			enable_irq(irq);
-	}
+	if (irq)
+		per_cpu(cpu_irq_ops, cpu)->enable_pmuirq(irq);
 
 	return 0;
 }
@@ -644,12 +702,8 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
 		return 0;
 
 	irq = armpmu_get_cpu_irq(pmu, cpu);
-	if (irq) {
-		if (irq_is_percpu_devid(irq))
-			disable_percpu_irq(irq);
-		else
-			disable_irq_nosync(irq);
-	}
+	if (irq)
+		per_cpu(cpu_irq_ops, cpu)->disable_pmuirq(irq);
 
 	per_cpu(cpu_armpmu, cpu) = NULL;
 
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH v5 7/7] arm_pmu: arm64: Use NMIs for PMU
  2020-06-17 11:38 ` Alexandru Elisei
@ 2020-06-17 11:38   ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-17 11:38 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel
  Cc: maz, will, catalin.marinas, mark.rutland, Julien Thierry,
	Julien Thierry, Will Deacon

From: Julien Thierry <julien.thierry@arm.com>

Add required PMU interrupt operations for NMIs. Request interrupt lines as
NMIs when possible, otherwise fall back to normal interrupts.

NMIs are only supported on the arm64 architecture with a GICv3 irqchip.

Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[Added that NMIs only work on arm64 + GICv3]
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 drivers/perf/arm_pmu.c | 62 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 56 insertions(+), 6 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 17e5952d21e4..dd9d7f61ee29 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -45,6 +45,17 @@ static const struct pmu_irq_ops pmuirq_ops = {
 	.free_pmuirq = armpmu_free_pmuirq
 };
 
+static void armpmu_free_pmunmi(unsigned int irq, int cpu, void __percpu *devid)
+{
+	free_nmi(irq, per_cpu_ptr(devid, cpu));
+}
+
+static const struct pmu_irq_ops pmunmi_ops = {
+	.enable_pmuirq = enable_nmi,
+	.disable_pmuirq = disable_nmi_nosync,
+	.free_pmuirq = armpmu_free_pmunmi
+};
+
 static void armpmu_enable_percpu_pmuirq(unsigned int irq)
 {
 	enable_percpu_irq(irq, IRQ_TYPE_NONE);
@@ -63,6 +74,31 @@ static const struct pmu_irq_ops percpu_pmuirq_ops = {
 	.free_pmuirq = armpmu_free_percpu_pmuirq
 };
 
+static void armpmu_enable_percpu_pmunmi(unsigned int irq)
+{
+	if (!prepare_percpu_nmi(irq))
+		enable_percpu_nmi(irq, IRQ_TYPE_NONE);
+}
+
+static void armpmu_disable_percpu_pmunmi(unsigned int irq)
+{
+	disable_percpu_nmi(irq);
+	teardown_percpu_nmi(irq);
+}
+
+static void armpmu_free_percpu_pmunmi(unsigned int irq, int cpu,
+				      void __percpu *devid)
+{
+	if (armpmu_count_irq_users(irq) == 1)
+		free_percpu_nmi(irq, devid);
+}
+
+static const struct pmu_irq_ops percpu_pmunmi_ops = {
+	.enable_pmuirq = armpmu_enable_percpu_pmunmi,
+	.disable_pmuirq = armpmu_disable_percpu_pmunmi,
+	.free_pmuirq = armpmu_free_percpu_pmunmi
+};
+
 static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
 static DEFINE_PER_CPU(int, cpu_irq);
 static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);
@@ -633,15 +669,29 @@ int armpmu_request_irq(int irq, int cpu)
 			    IRQF_NO_THREAD;
 
 		irq_set_status_flags(irq, IRQ_NOAUTOEN);
-		err = request_irq(irq, handler, irq_flags, "arm-pmu",
+
+		err = request_nmi(irq, handler, irq_flags, "arm-pmu",
 				  per_cpu_ptr(&cpu_armpmu, cpu));
 
-		irq_ops = &pmuirq_ops;
+		/* If cannot get an NMI, get a normal interrupt */
+		if (err) {
+			err = request_irq(irq, handler, irq_flags, "arm-pmu",
+					  per_cpu_ptr(&cpu_armpmu, cpu));
+			irq_ops = &pmuirq_ops;
+		} else {
+			irq_ops = &pmunmi_ops;
+		}
 	} else if (armpmu_count_irq_users(irq) == 0) {
-		err = request_percpu_irq(irq, handler, "arm-pmu",
-					 &cpu_armpmu);
-
-		irq_ops = &percpu_pmuirq_ops;
+		err = request_percpu_nmi(irq, handler, "arm-pmu", &cpu_armpmu);
+
+		/* If cannot get an NMI, get a normal interrupt */
+		if (err) {
+			err = request_percpu_irq(irq, handler, "arm-pmu",
+						 &cpu_armpmu);
+			irq_ops = &percpu_pmuirq_ops;
+		} else {
+			irq_ops = &percpu_pmunmi_ops;
+		}
 	} else {
 		/* Per cpudevid irq was already requested by another CPU */
 		irq_ops = armpmu_find_irq_ops(irq);
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_event()
  2020-06-17 11:38   ` Alexandru Elisei
@ 2020-06-17 20:01     ` Stephen Boyd
  -1 siblings, 0 replies; 45+ messages in thread
From: Stephen Boyd @ 2020-06-17 20:01 UTC (permalink / raw)
  To: Alexandru Elisei, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, Peter Zijlstra, maz, Jiri Olsa,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, will, Julien Thierry

Quoting Alexandru Elisei (2020-06-17 04:38:45)
> Writes to the PMXEVTYPER_EL0 register are not self-synchronising. In
> armv8pmu_enable_event(), the PE can reorder configuring the event type
> after we have enabled the counter and the interrupt. This can lead to an
> interrupt being asserted because the of the previous event type that we

'because the of the' doesn't read properly.

> were counting, not the one that we've just enabled.
> 
> The same rationale applies to writes to the PMINTENSET_EL1 register. The PE
> can reorder enabling the interrupt at any point in the future after we have
> enabled the event.
> 
> Prevent both situations from happening by adding an ISB just before we
> enable the event counter.
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 4d7879484cec..ee180b2a5b39 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -605,6 +605,7 @@ static void armv8pmu_enable_event(struct perf_event *event)
>          * Enable interrupt for this counter
>          */
>         armv8pmu_enable_event_irq(event);
> +       isb();

Please add a comment before the isb() explaining the situation. Nobody
reading the code will know what this is for, and they shouldn't have to
do git archaeology to figure it out.
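
Something like the below would do (just a sketch of possible wording,
lifted from the commit message; exact phrasing is up to you):

	/*
	 * Enable interrupt for this counter
	 */
	armv8pmu_enable_event_irq(event);
	/*
	 * Writes to PMXEVTYPER_EL0 and PMINTENSET_EL1 are not
	 * self-synchronising. Make sure the event type and the interrupt
	 * enable are visible before the counter is enabled, so a stale
	 * event configuration cannot raise a spurious overflow interrupt.
	 */
	isb();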

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 2/7] arm64: perf: Avoid PMXEV* indirection
  2020-06-17 11:38   ` Alexandru Elisei
@ 2020-06-17 20:11     ` Stephen Boyd
  -1 siblings, 0 replies; 45+ messages in thread
From: Stephen Boyd @ 2020-06-17 20:11 UTC (permalink / raw)
  To: Alexandru Elisei, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, Peter Zijlstra, maz, Jiri Olsa,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, will, Julien Thierry

Quoting Alexandru Elisei (2020-06-17 04:38:46)
> From: Mark Rutland <mark.rutland@arm.com>
> 
> Currently we access the counter registers and their respective type
> registers indirectly. This requires us to write to PMSELR, issue an ISB,
> then access the relevant PMXEV* registers.
> 
> This is unfortunate, because:
> 
> * Under virtualization, accessing one registers requires two traps to

one register? Not plural presumably.

>   the hypervisor, even though we could access the register directly with
>   a single trap.
> 
> * We have to issue an ISB which we could otherwise avoid the cost of.
> 
> * When we use NMIs, the NMI handler will have to save/restore the select
>   register in case the code it preempted was attempting to access a
>   counter or its type register.
> 
> We can avoid these issues by directly accessing the relevant registers.
> This patch adds helpers to do so.
> 
> In armv8pmu_enable_event() we still need the ISB to prevent the PE from
> reordering the write to PMINTENSET_EL1 register. If the interrupt is
> enabled before we disable the counter and the new event is configured,
> we might get an interrupt triggered by the previously programmed event
> overflowing, but which we wrongly attribute to the event that we are
> enabling.
> 
> In the process, remove the comment that refers to the ARMv7 PMU.
> 
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [Julien T.: Don't inline read/write functions to avoid big code-size
>         increase, remove unused read_pmevtypern function,
>         fix counter index issue.]
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> [Removed comment, removed trailing semicolons in macros, added ISB]
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> ---
>  arch/arm64/kernel/perf_event.c | 95 +++++++++++++++++++++++++++++-----
>  1 file changed, 81 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index ee180b2a5b39..e95b5ca70a53 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -323,6 +323,73 @@ static inline bool armv8pmu_event_is_chained(struct perf_event *event)
>  #define        ARMV8_IDX_TO_COUNTER(x) \
>         (((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)
>  
> +/*
> + * This code is really good
> + */

Superb!

> +
> +#define PMEVN_CASE(n, case_macro) \
> +       case n: case_macro(n); break
> +
> +#define PMEVN_SWITCH(x, case_macro)                            \
> +       do {                                                    \
> +               switch (x) {                                    \
> +               PMEVN_CASE(0,  case_macro);                     \
> +               PMEVN_CASE(1,  case_macro);                     \
> +               PMEVN_CASE(2,  case_macro);                     \
> +               PMEVN_CASE(3,  case_macro);                     \
> +               PMEVN_CASE(4,  case_macro);                     \
> +               PMEVN_CASE(5,  case_macro);                     \
> +               PMEVN_CASE(6,  case_macro);                     \
> +               PMEVN_CASE(7,  case_macro);                     \
> +               PMEVN_CASE(8,  case_macro);                     \
> +               PMEVN_CASE(9,  case_macro);                     \
> +               PMEVN_CASE(10, case_macro);                     \
> +               PMEVN_CASE(11, case_macro);                     \
> +               PMEVN_CASE(12, case_macro);                     \
> +               PMEVN_CASE(13, case_macro);                     \
> +               PMEVN_CASE(14, case_macro);                     \
> +               PMEVN_CASE(15, case_macro);                     \
> +               PMEVN_CASE(16, case_macro);                     \
> +               PMEVN_CASE(17, case_macro);                     \
> +               PMEVN_CASE(18, case_macro);                     \
> +               PMEVN_CASE(19, case_macro);                     \
> +               PMEVN_CASE(20, case_macro);                     \
> +               PMEVN_CASE(21, case_macro);                     \
> +               PMEVN_CASE(22, case_macro);                     \
> +               PMEVN_CASE(23, case_macro);                     \
> +               PMEVN_CASE(24, case_macro);                     \
> +               PMEVN_CASE(25, case_macro);                     \
> +               PMEVN_CASE(26, case_macro);                     \
> +               PMEVN_CASE(27, case_macro);                     \
> +               PMEVN_CASE(28, case_macro);                     \
> +               PMEVN_CASE(29, case_macro);                     \
> +               PMEVN_CASE(30, case_macro);                     \
> +               default: WARN(1, "Invalid PMEV* index");        \

Missing newline on that WARN message?
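
That is, just (sketch, only the trailing newline added):

	default: WARN(1, "Invalid PMEV* index\n");	\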

> +               }                                               \
> +       } while (0)
> +
> +#define RETURN_READ_PMEVCNTRN(n) \
> +       return read_sysreg(pmevcntr##n##_el0)
> +static unsigned long read_pmevcntrn(int n)
> +{
> +       PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
> +       return 0;
> +}
> +
> +#define WRITE_PMEVCNTRN(n) \
> +       write_sysreg(val, pmevcntr##n##_el0)
> +static void write_pmevcntrn(int n, unsigned long val)
> +{
> +       PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
> +}
> +
> +#define WRITE_PMEVTYPERN(n) \
> +       write_sysreg(val, pmevtyper##n##_el0)
> +static void write_pmevtypern(int n, unsigned long val)
> +{
> +       PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
> +}
> +
>  static inline u32 armv8pmu_pmcr_read(void)
>  {
>         return read_sysreg(pmcr_el0);
> @@ -351,17 +418,11 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
>         return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
>  }
>  
> -static inline void armv8pmu_select_counter(int idx)
> +static inline u32 armv8pmu_read_evcntr(int idx)
>  {
>         u32 counter = ARMV8_IDX_TO_COUNTER(idx);
> -       write_sysreg(counter, pmselr_el0);
> -       isb();
> -}
>  
> -static inline u64 armv8pmu_read_evcntr(int idx)
> -{
> -       armv8pmu_select_counter(idx);
> -       return read_sysreg(pmxevcntr_el0);
> +       return read_pmevcntrn(counter);
>  }
>  
>  static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
> @@ -433,8 +494,9 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
>  
>  static inline void armv8pmu_write_evcntr(int idx, u64 value)
>  {
> -       armv8pmu_select_counter(idx);
> -       write_sysreg(value, pmxevcntr_el0);
> +       u32 counter = ARMV8_IDX_TO_COUNTER(idx);

Might be a good idea to make ARMV8_IDX_TO_COUNTER a static inline
function that has a return type of u32. I had to go check the code to
make sure it wasn't something larger.
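
Something along these lines, perhaps (a sketch only; the lower-case name
is a placeholder and the arithmetic is unchanged from the macro):

	static inline u32 armv8pmu_idx_to_counter(int idx)
	{
		return (idx - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK;
	}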

> +
> +       write_pmevcntrn(counter, value);
>  }
>  
>  static inline void armv8pmu_write_hw_counter(struct perf_event *event,
> @@ -469,9 +531,10 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)
>  
>  static inline void armv8pmu_write_evtype(int idx, u32 val)
>  {
> -       armv8pmu_select_counter(idx);
> +       u32 counter = ARMV8_IDX_TO_COUNTER(idx);
> +
>         val &= ARMV8_PMU_EVTYPE_MASK;
> -       write_sysreg(val, pmxevtyper_el0);
> +       write_pmevtypern(counter, val);
>  }
>  
>  static inline void armv8pmu_write_event_type(struct perf_event *event)
> @@ -491,7 +554,10 @@ static inline void armv8pmu_write_event_type(struct perf_event *event)
>                 armv8pmu_write_evtype(idx - 1, hwc->config_base);
>                 armv8pmu_write_evtype(idx, chain_evt);
>         } else {
> -               armv8pmu_write_evtype(idx, hwc->config_base);
> +               if (idx == ARMV8_IDX_CYCLE_COUNTER)
> +                       write_sysreg(hwc->config_base, pmccfiltr_el0);
> +               else
> +                       armv8pmu_write_evtype(idx, hwc->config_base);
>         }
>  }
>  
> @@ -595,9 +661,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
>          * Disable counter
>          */
>         armv8pmu_disable_event_counter(event);
> +       isb();

Same comment about the uncommented isb().

>  
>         /*
> -        * Set event (if destined for PMNx counters).
> +        * Set event.
>          */
>         armv8pmu_write_event_type(event);
>

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 3/7] arm64: perf: Remove PMU locking
  2020-06-17 11:38   ` Alexandru Elisei
@ 2020-06-17 20:17     ` Stephen Boyd
  -1 siblings, 0 replies; 45+ messages in thread
From: Stephen Boyd @ 2020-06-17 20:17 UTC (permalink / raw)
  To: Alexandru Elisei, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, Peter Zijlstra, maz, Jiri Olsa,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, will

Quoting Alexandru Elisei (2020-06-17 04:38:47)
> From: Julien Thierry <julien.thierry@arm.com>
> 
> The PMU is disabled and enabled, and the counters are programmed from
> contexts where interrupts or preemption is disabled.
> 
> The functions to toggle the PMU and to program the PMU counters access the
> registers directly and don't access data modified by the interrupt handler.
> That, and the fact that they're always called from non-preemptible
> contexts, means that we don't need to disable interrupts or use a spinlock.

Maybe we should add a lockdep assertion that the code isn't preemptible?
I.e. add a cant_sleep() call? Or is it more that we don't need locking
because we're just doing register accesses and don't need to protect
those accesses from each other?
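
For example, a minimal sketch (armv8pmu_start() is just one of the
affected functions; cant_sleep() would fire if a caller reached this
from a preemptible context):

	static void armv8pmu_start(struct arm_pmu *cpu_pmu)
	{
		/* Catch callers that run this with preemption enabled */
		cant_sleep();

		/* Enable all counters */
		armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
	}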

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 6/7] arm_pmu: Introduce pmu_irq_ops
  2020-06-17 11:38   ` Alexandru Elisei
@ 2020-06-17 20:23     ` Stephen Boyd
  -1 siblings, 0 replies; 45+ messages in thread
From: Stephen Boyd @ 2020-06-17 20:23 UTC (permalink / raw)
  To: Alexandru Elisei, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, maz, Will Deacon, catalin.marinas,
	will, Julien Thierry

Quoting Alexandru Elisei (2020-06-17 04:38:50)
> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> index df352b334ea7..17e5952d21e4 100644
> --- a/drivers/perf/arm_pmu.c
> +++ b/drivers/perf/arm_pmu.c
> @@ -26,8 +26,46 @@
>  
>  #include <asm/irq_regs.h>
>  
> +static int armpmu_count_irq_users(const int irq);
> +
> +struct pmu_irq_ops {
> +       void (*enable_pmuirq)(unsigned int irq);
> +       void (*disable_pmuirq)(unsigned int irq);
> +       void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);

Does 'cpu' need to be signed?
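
That is, could the prototype simply be (sketch):

	void (*free_pmuirq)(unsigned int irq, unsigned int cpu,
			    void __percpu *devid);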

> +};
> +
> +static void armpmu_free_pmuirq(unsigned int irq, int cpu, void __percpu *devid)
> +{
> +       free_irq(irq, per_cpu_ptr(devid, cpu));
> +}
> +
> +static const struct pmu_irq_ops pmuirq_ops = {
> +       .enable_pmuirq = enable_irq,
> +       .disable_pmuirq = disable_irq_nosync,
> +       .free_pmuirq = armpmu_free_pmuirq
> +};
> +
> +static void armpmu_enable_percpu_pmuirq(unsigned int irq)
> +{
> +       enable_percpu_irq(irq, IRQ_TYPE_NONE);
> +}
> +
> +static void armpmu_free_percpu_pmuirq(unsigned int irq, int cpu,
> +                                  void __percpu *devid)
> +{
> +       if (armpmu_count_irq_users(irq) == 1)
> +               free_percpu_irq(irq, devid);
> +}
> +
> +static const struct pmu_irq_ops percpu_pmuirq_ops = {
> +       .enable_pmuirq = armpmu_enable_percpu_pmuirq,
> +       .disable_pmuirq = disable_percpu_irq,
> +       .free_pmuirq = armpmu_free_percpu_pmuirq
> +};
> +
>  static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
>  static DEFINE_PER_CPU(int, cpu_irq);

Same question as above.

> +static DEFINE_PER_CPU(const struct pmu_irq_ops *, cpu_irq_ops);
>  
>  static inline u64 arm_pmu_event_max_period(struct perf_event *event)
>  {

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_event()
  2020-06-17 20:01     ` Stephen Boyd
@ 2020-06-18 10:50       ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-18 10:50 UTC (permalink / raw)
  To: Stephen Boyd, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, Peter Zijlstra, maz, Jiri Olsa,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, will, Julien Thierry

Hi Stephen,

Thank you very much for taking the time to review the patches!

Comments below.

On 6/17/20 9:01 PM, Stephen Boyd wrote:
> Quoting Alexandru Elisei (2020-06-17 04:38:45)
>> Writes to the PMXEVTYPER_EL0 register are not self-synchronising. In
>> armv8pmu_enable_event(), the PE can reorder configuring the event type
>> after we have enabled the counter and the interrupt. This can lead to an
>> interrupt being asserted because the of the previous event type that we
> 'because the of the' doesn't read properly.

Typo on my part, will fix it.

>
>> were counting, not the one that we've just enabled.
>>
>> The same rationale applies to writes to the PMINTENSET_EL1 register. The PE
>> can reorder enabling the interrupt at any point in the future after we have
>> enabled the event.
>>
>> Prevent both situations from happening by adding an ISB just before we
>> enable the event counter.
>>
>> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
>> index 4d7879484cec..ee180b2a5b39 100644
>> --- a/arch/arm64/kernel/perf_event.c
>> +++ b/arch/arm64/kernel/perf_event.c
>> @@ -605,6 +605,7 @@ static void armv8pmu_enable_event(struct perf_event *event)
>>          * Enable interrupt for this counter
>>          */
>>         armv8pmu_enable_event_irq(event);
>> +       isb();
> Please add a comment before the isb() explaining the situation. Nobody
> knows what this is for when reading the code and they don't want to do
> git archaeology to figure it out.

That's a good idea, I'll do that.

Thanks,
Alex

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 2/7] arm64: perf: Avoid PMXEV* indirection
  2020-06-17 20:11     ` Stephen Boyd
@ 2020-06-18 10:51       ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-18 10:51 UTC (permalink / raw)
  To: Stephen Boyd, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, Peter Zijlstra, maz, Jiri Olsa,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, will, Julien Thierry

Hello,

On 6/17/20 9:11 PM, Stephen Boyd wrote:
> Quoting Alexandru Elisei (2020-06-17 04:38:46)
>> From: Mark Rutland <mark.rutland@arm.com>
>>
>> Currently we access the counter registers and their respective type
>> registers indirectly. This requires us to write to PMSELR, issue an ISB,
>> then access the relevant PMXEV* registers.
>>
>> This is unfortunate, because:
>>
>> * Under virtualization, accessing one registers requires two traps to
> one register? Not plural presumably.

That's another typo, will fix it.

>
>>   the hypervisor, even though we could access the register directly with
>>   a single trap.
>>
>> * We have to issue an ISB which we could otherwise avoid the cost of.
>>
>> * When we use NMIs, the NMI handler will have to save/restore the select
>>   register in case the code it preempted was attempting to access a
>>   counter or its type register.
>>
>> We can avoid these issues by directly accessing the relevant registers.
>> This patch adds helpers to do so.
>>
>> In armv8pmu_enable_event() we still need the ISB to prevent the PE from
>> reordering the write to PMINTENSET_EL1 register. If the interrupt is
>> enabled before we disable the counter and the new event is configured,
>> we might get an interrupt triggered by the previously programmed event
>> overflowing, but which we wrongly attribute to the event that we are
>> enabling.
>>
>> In the process, remove the comment that refers to the ARMv7 PMU.
>>
>> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
>> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
>> Cc: Jiri Olsa <jolsa@redhat.com>
>> Cc: Namhyung Kim <namhyung@kernel.org>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> [Julien T.: Don't inline read/write functions to avoid big code-size
>>         increase, remove unused read_pmevtypern function,
>>         fix counter index issue.]
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> [Removed comment, removed trailing semicolons in macros, added ISB]
>> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
>> ---
>>  arch/arm64/kernel/perf_event.c | 95 +++++++++++++++++++++++++++++-----
>>  1 file changed, 81 insertions(+), 14 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
>> index ee180b2a5b39..e95b5ca70a53 100644
>> --- a/arch/arm64/kernel/perf_event.c
>> +++ b/arch/arm64/kernel/perf_event.c
>> @@ -323,6 +323,73 @@ static inline bool armv8pmu_event_is_chained(struct perf_event *event)
>>  #define        ARMV8_IDX_TO_COUNTER(x) \
>>         (((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)
>>  
>> +/*
>> + * This code is really good
>> + */
> Superb!

Exactly! I thought so too; that's why I kept the comment.

>
>> +
>> +#define PMEVN_CASE(n, case_macro) \
>> +       case n: case_macro(n); break
>> +
>> +#define PMEVN_SWITCH(x, case_macro)                            \
>> +       do {                                                    \
>> +               switch (x) {                                    \
>> +               PMEVN_CASE(0,  case_macro);                     \
>> +               PMEVN_CASE(1,  case_macro);                     \
>> +               PMEVN_CASE(2,  case_macro);                     \
>> +               PMEVN_CASE(3,  case_macro);                     \
>> +               PMEVN_CASE(4,  case_macro);                     \
>> +               PMEVN_CASE(5,  case_macro);                     \
>> +               PMEVN_CASE(6,  case_macro);                     \
>> +               PMEVN_CASE(7,  case_macro);                     \
>> +               PMEVN_CASE(8,  case_macro);                     \
>> +               PMEVN_CASE(9,  case_macro);                     \
>> +               PMEVN_CASE(10, case_macro);                     \
>> +               PMEVN_CASE(11, case_macro);                     \
>> +               PMEVN_CASE(12, case_macro);                     \
>> +               PMEVN_CASE(13, case_macro);                     \
>> +               PMEVN_CASE(14, case_macro);                     \
>> +               PMEVN_CASE(15, case_macro);                     \
>> +               PMEVN_CASE(16, case_macro);                     \
>> +               PMEVN_CASE(17, case_macro);                     \
>> +               PMEVN_CASE(18, case_macro);                     \
>> +               PMEVN_CASE(19, case_macro);                     \
>> +               PMEVN_CASE(20, case_macro);                     \
>> +               PMEVN_CASE(21, case_macro);                     \
>> +               PMEVN_CASE(22, case_macro);                     \
>> +               PMEVN_CASE(23, case_macro);                     \
>> +               PMEVN_CASE(24, case_macro);                     \
>> +               PMEVN_CASE(25, case_macro);                     \
>> +               PMEVN_CASE(26, case_macro);                     \
>> +               PMEVN_CASE(27, case_macro);                     \
>> +               PMEVN_CASE(28, case_macro);                     \
>> +               PMEVN_CASE(29, case_macro);                     \
>> +               PMEVN_CASE(30, case_macro);                     \
>> +               default: WARN(1, "Invalid PMEV* index");        \
> Missing newline on that WARN message?

Indeed, will add it.

>
>> +               }                                               \
>> +       } while (0)
>> +
>> +#define RETURN_READ_PMEVCNTRN(n) \
>> +       return read_sysreg(pmevcntr##n##_el0)
>> +static unsigned long read_pmevcntrn(int n)
>> +{
>> +       PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
>> +       return 0;
>> +}
>> +
>> +#define WRITE_PMEVCNTRN(n) \
>> +       write_sysreg(val, pmevcntr##n##_el0)
>> +static void write_pmevcntrn(int n, unsigned long val)
>> +{
>> +       PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
>> +}
>> +
>> +#define WRITE_PMEVTYPERN(n) \
>> +       write_sysreg(val, pmevtyper##n##_el0)
>> +static void write_pmevtypern(int n, unsigned long val)
>> +{
>> +       PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
>> +}
>> +
>>  static inline u32 armv8pmu_pmcr_read(void)
>>  {
>>         return read_sysreg(pmcr_el0);
>> @@ -351,17 +418,11 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
>>         return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
>>  }
>>  
>> -static inline void armv8pmu_select_counter(int idx)
>> +static inline u32 armv8pmu_read_evcntr(int idx)
>>  {
>>         u32 counter = ARMV8_IDX_TO_COUNTER(idx);
>> -       write_sysreg(counter, pmselr_el0);
>> -       isb();
>> -}
>>  
>> -static inline u64 armv8pmu_read_evcntr(int idx)
>> -{
>> -       armv8pmu_select_counter(idx);
>> -       return read_sysreg(pmxevcntr_el0);
>> +       return read_pmevcntrn(counter);
>>  }
>>  
>>  static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
>> @@ -433,8 +494,9 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
>>  
>>  static inline void armv8pmu_write_evcntr(int idx, u64 value)
>>  {
>> -       armv8pmu_select_counter(idx);
>> -       write_sysreg(value, pmxevcntr_el0);
>> +       u32 counter = ARMV8_IDX_TO_COUNTER(idx);
> Might be a good idea to make ARMV8_IDX_TO_COUNTER a static inline
> function that has a return type of u32. I had to go check the code to
> make sure it wasn't something larger.

Architecturally, there are at most 32 counter registers, so the index would
fit even in an s8; I don't think type checking would really help us here.

>
>> +
>> +       write_pmevcntrn(counter, value);
>>  }
>>  
>>  static inline void armv8pmu_write_hw_counter(struct perf_event *event,
>> @@ -469,9 +531,10 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)
>>  
>>  static inline void armv8pmu_write_evtype(int idx, u32 val)
>>  {
>> -       armv8pmu_select_counter(idx);
>> +       u32 counter = ARMV8_IDX_TO_COUNTER(idx);
>> +
>>         val &= ARMV8_PMU_EVTYPE_MASK;
>> -       write_sysreg(val, pmxevtyper_el0);
>> +       write_pmevtypern(counter, val);
>>  }
>>  
>>  static inline void armv8pmu_write_event_type(struct perf_event *event)
>> @@ -491,7 +554,10 @@ static inline void armv8pmu_write_event_type(struct perf_event *event)
>>                 armv8pmu_write_evtype(idx - 1, hwc->config_base);
>>                 armv8pmu_write_evtype(idx, chain_evt);
>>         } else {
>> -               armv8pmu_write_evtype(idx, hwc->config_base);
>> +               if (idx == ARMV8_IDX_CYCLE_COUNTER)
>> +                       write_sysreg(hwc->config_base, pmccfiltr_el0);
>> +               else
>> +                       armv8pmu_write_evtype(idx, hwc->config_base);
>>         }
>>  }
>>  
>> @@ -595,9 +661,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
>>          * Disable counter
>>          */
>>         armv8pmu_disable_event_counter(event);
>> +       isb();
> Same comment about uncommented isb().

Will add a comment explaining the ISB.

Thanks,
Alex

^ permalink raw reply	[flat|nested] 45+ messages in thread

>> +               PMEVN_CASE(3,  case_macro);                     \
>> +               PMEVN_CASE(4,  case_macro);                     \
>> +               PMEVN_CASE(5,  case_macro);                     \
>> +               PMEVN_CASE(6,  case_macro);                     \
>> +               PMEVN_CASE(7,  case_macro);                     \
>> +               PMEVN_CASE(8,  case_macro);                     \
>> +               PMEVN_CASE(9,  case_macro);                     \
>> +               PMEVN_CASE(10, case_macro);                     \
>> +               PMEVN_CASE(11, case_macro);                     \
>> +               PMEVN_CASE(12, case_macro);                     \
>> +               PMEVN_CASE(13, case_macro);                     \
>> +               PMEVN_CASE(14, case_macro);                     \
>> +               PMEVN_CASE(15, case_macro);                     \
>> +               PMEVN_CASE(16, case_macro);                     \
>> +               PMEVN_CASE(17, case_macro);                     \
>> +               PMEVN_CASE(18, case_macro);                     \
>> +               PMEVN_CASE(19, case_macro);                     \
>> +               PMEVN_CASE(20, case_macro);                     \
>> +               PMEVN_CASE(21, case_macro);                     \
>> +               PMEVN_CASE(22, case_macro);                     \
>> +               PMEVN_CASE(23, case_macro);                     \
>> +               PMEVN_CASE(24, case_macro);                     \
>> +               PMEVN_CASE(25, case_macro);                     \
>> +               PMEVN_CASE(26, case_macro);                     \
>> +               PMEVN_CASE(27, case_macro);                     \
>> +               PMEVN_CASE(28, case_macro);                     \
>> +               PMEVN_CASE(29, case_macro);                     \
>> +               PMEVN_CASE(30, case_macro);                     \
>> +               default: WARN(1, "Invalid PMEV* index");        \
> Missing newline on that WARN message?

Indeed, will add it.
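
For the record, with the newline added the default case becomes:

                default: WARN(1, "Invalid PMEV* index\n");      \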

>
>> +               }                                               \
>> +       } while (0)
>> +
>> +#define RETURN_READ_PMEVCNTRN(n) \
>> +       return read_sysreg(pmevcntr##n##_el0)
>> +static unsigned long read_pmevcntrn(int n)
>> +{
>> +       PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
>> +       return 0;
>> +}
>> +
>> +#define WRITE_PMEVCNTRN(n) \
>> +       write_sysreg(val, pmevcntr##n##_el0)
>> +static void write_pmevcntrn(int n, unsigned long val)
>> +{
>> +       PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
>> +}
>> +
>> +#define WRITE_PMEVTYPERN(n) \
>> +       write_sysreg(val, pmevtyper##n##_el0)
>> +static void write_pmevtypern(int n, unsigned long val)
>> +{
>> +       PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
>> +}
>> +
>>  static inline u32 armv8pmu_pmcr_read(void)
>>  {
>>         return read_sysreg(pmcr_el0);
>> @@ -351,17 +418,11 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
>>         return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
>>  }
>>  
>> -static inline void armv8pmu_select_counter(int idx)
>> +static inline u32 armv8pmu_read_evcntr(int idx)
>>  {
>>         u32 counter = ARMV8_IDX_TO_COUNTER(idx);
>> -       write_sysreg(counter, pmselr_el0);
>> -       isb();
>> -}
>>  
>> -static inline u64 armv8pmu_read_evcntr(int idx)
>> -{
>> -       armv8pmu_select_counter(idx);
>> -       return read_sysreg(pmxevcntr_el0);
>> +       return read_pmevcntrn(counter);
>>  }
>>  
>>  static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
>> @@ -433,8 +494,9 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
>>  
>>  static inline void armv8pmu_write_evcntr(int idx, u64 value)
>>  {
>> -       armv8pmu_select_counter(idx);
>> -       write_sysreg(value, pmxevcntr_el0);
>> +       u32 counter = ARMV8_IDX_TO_COUNTER(idx);
> Might be a good idea to make ARMV8_IDX_TO_COUNTER a static inline
> function that has a return type of u32. I had to go check the code to
> make sure it wasn't something larger.

Architecturally, there are at most 32 counter registers, which would fit in an s8,
so I don't think type checking would really help us here.

>
>> +
>> +       write_pmevcntrn(counter, value);
>>  }
>>  
>>  static inline void armv8pmu_write_hw_counter(struct perf_event *event,
>> @@ -469,9 +531,10 @@ static void armv8pmu_write_counter(struct perf_event *event, u64 value)
>>  
>>  static inline void armv8pmu_write_evtype(int idx, u32 val)
>>  {
>> -       armv8pmu_select_counter(idx);
>> +       u32 counter = ARMV8_IDX_TO_COUNTER(idx);
>> +
>>         val &= ARMV8_PMU_EVTYPE_MASK;
>> -       write_sysreg(val, pmxevtyper_el0);
>> +       write_pmevtypern(counter, val);
>>  }
>>  
>>  static inline void armv8pmu_write_event_type(struct perf_event *event)
>> @@ -491,7 +554,10 @@ static inline void armv8pmu_write_event_type(struct perf_event *event)
>>                 armv8pmu_write_evtype(idx - 1, hwc->config_base);
>>                 armv8pmu_write_evtype(idx, chain_evt);
>>         } else {
>> -               armv8pmu_write_evtype(idx, hwc->config_base);
>> +               if (idx == ARMV8_IDX_CYCLE_COUNTER)
>> +                       write_sysreg(hwc->config_base, pmccfiltr_el0);
>> +               else
>> +                       armv8pmu_write_evtype(idx, hwc->config_base);
>>         }
>>  }
>>  
>> @@ -595,9 +661,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
>>          * Disable counter
>>          */
>>         armv8pmu_disable_event_counter(event);
>> +       isb();
> Same comment about uncommented isb().

Will add a comment explaining the ISB.

Thanks,
Alex

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 6/7] arm_pmu: Introduce pmu_irq_ops
  2020-06-17 20:23     ` Stephen Boyd
@ 2020-06-18 10:51       ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-18 10:51 UTC (permalink / raw)
  To: Stephen Boyd, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, maz, Will Deacon, catalin.marinas,
	will, Julien Thierry

Hi,

On 6/17/20 9:23 PM, Stephen Boyd wrote:
> Quoting Alexandru Elisei (2020-06-17 04:38:50)
>> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
>> index df352b334ea7..17e5952d21e4 100644
>> --- a/drivers/perf/arm_pmu.c
>> +++ b/drivers/perf/arm_pmu.c
>> @@ -26,8 +26,46 @@
>>  
>>  #include <asm/irq_regs.h>
>>  
>> +static int armpmu_count_irq_users(const int irq);
>> +
>> +struct pmu_irq_ops {
>> +       void (*enable_pmuirq)(unsigned int irq);
>> +       void (*disable_pmuirq)(unsigned int irq);
>> +       void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
> Does 'cpu' need to be signed?

I'm not sure what you mean. The cpu argument comes from
drivers/perf/arm_pmu_platform.c::armpmu_free_irqs -> armpmu_free_irq, where it is the
iterator variable used by the macro for_each_cpu. The documentation for the macro
states:

/**
* for_each_cpu - iterate over every cpu in a mask
* @cpu: the (optionally unsigned) integer iterator ^^^^^^^^^^^^^^^^^^^

I could write a patch to convert to an unsigned int, but it seems like unnecessary
churn to me.
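
For context, the caller has roughly this shape (simplified from
drivers/perf/arm_pmu_platform.c, so treat it as a sketch rather than the exact code):

static void armpmu_free_irqs(struct arm_pmu *armpmu)
{
        int cpu;        /* the (signed) for_each_cpu() iterator discussed above */
        struct pmu_hw_events __percpu *hw_events = armpmu->hw_events;

        for_each_cpu(cpu, &armpmu->supported_cpus)
                armpmu_free_irq(per_cpu(hw_events->irq, cpu), cpu);
}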

>> +};
>> +
>> +static void armpmu_free_pmuirq(unsigned int irq, int cpu, void __percpu *devid)
>> +{
>> +       free_irq(irq, per_cpu_ptr(devid, cpu));
>> +}
>> +
>> +static const struct pmu_irq_ops pmuirq_ops = {
>> +       .enable_pmuirq = enable_irq,
>> +       .disable_pmuirq = disable_irq_nosync,
>> +       .free_pmuirq = armpmu_free_pmuirq
>> +};
>> +
>> +static void armpmu_enable_percpu_pmuirq(unsigned int irq)
>> +{
>> +       enable_percpu_irq(irq, IRQ_TYPE_NONE);
>> +}
>> +
>> +static void armpmu_free_percpu_pmuirq(unsigned int irq, int cpu,
>> +                                  void __percpu *devid)
>> +{
>> +       if (armpmu_count_irq_users(irq) == 1)
>> +               free_percpu_irq(irq, devid);
>> +}
>> +
>> +static const struct pmu_irq_ops percpu_pmuirq_ops = {
>> +       .enable_pmuirq = armpmu_enable_percpu_pmuirq,
>> +       .disable_pmuirq = disable_percpu_irq,
>> +       .free_pmuirq = armpmu_free_percpu_pmuirq
>> +};
>> +
>>  static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
>>  static DEFINE_PER_CPU(int, cpu_irq);
> Same question as above.

Same situation as above - cpu is the iterator variable for for_each_cpu.

Thanks,
Alex

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 6/7] arm_pmu: Introduce pmu_irq_ops
@ 2020-06-18 10:51       ` Alexandru Elisei
  0 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-18 10:51 UTC (permalink / raw)
  To: Stephen Boyd, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, maz, Will Deacon, catalin.marinas,
	will, Julien Thierry

Hi,

On 6/17/20 9:23 PM, Stephen Boyd wrote:
> Quoting Alexandru Elisei (2020-06-17 04:38:50)
>> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
>> index df352b334ea7..17e5952d21e4 100644
>> --- a/drivers/perf/arm_pmu.c
>> +++ b/drivers/perf/arm_pmu.c
>> @@ -26,8 +26,46 @@
>>  
>>  #include <asm/irq_regs.h>
>>  
>> +static int armpmu_count_irq_users(const int irq);
>> +
>> +struct pmu_irq_ops {
>> +       void (*enable_pmuirq)(unsigned int irq);
>> +       void (*disable_pmuirq)(unsigned int irq);
>> +       void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
> Does 'cpu' need to be signed?

I'm not sure what you mean. The cpu argument comes from
drivers/perf/arm_pmu_platform.c::armpmu_free_irqs -> armpmu_free_irq, where it is the
iterator variable used by the macro for_each_cpu. The documentation for the macro
states:

/**
* for_each_cpu - iterate over every cpu in a mask
* @cpu: the (optionally unsigned) integer iterator ^^^^^^^^^^^^^^^^^^^

I could write a patch to convert to an unsigned int, but it seems like unnecessary
churn to me.

>> +};
>> +
>> +static void armpmu_free_pmuirq(unsigned int irq, int cpu, void __percpu *devid)
>> +{
>> +       free_irq(irq, per_cpu_ptr(devid, cpu));
>> +}
>> +
>> +static const struct pmu_irq_ops pmuirq_ops = {
>> +       .enable_pmuirq = enable_irq,
>> +       .disable_pmuirq = disable_irq_nosync,
>> +       .free_pmuirq = armpmu_free_pmuirq
>> +};
>> +
>> +static void armpmu_enable_percpu_pmuirq(unsigned int irq)
>> +{
>> +       enable_percpu_irq(irq, IRQ_TYPE_NONE);
>> +}
>> +
>> +static void armpmu_free_percpu_pmuirq(unsigned int irq, int cpu,
>> +                                  void __percpu *devid)
>> +{
>> +       if (armpmu_count_irq_users(irq) == 1)
>> +               free_percpu_irq(irq, devid);
>> +}
>> +
>> +static const struct pmu_irq_ops percpu_pmuirq_ops = {
>> +       .enable_pmuirq = armpmu_enable_percpu_pmuirq,
>> +       .disable_pmuirq = disable_percpu_irq,
>> +       .free_pmuirq = armpmu_free_percpu_pmuirq
>> +};
>> +
>>  static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
>>  static DEFINE_PER_CPU(int, cpu_irq);
> Same question as above.

Same situation as above - cpu is the iterator variable for for_each_cpu.

Thanks,
Alex

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 3/7] arm64: perf: Remove PMU locking
  2020-06-17 20:17     ` Stephen Boyd
@ 2020-06-18 10:51       ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-18 10:51 UTC (permalink / raw)
  To: Stephen Boyd, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, will, Julien Thierry, Peter Zijlstra, maz,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, Jiri Olsa

Hi,

On 6/17/20 9:17 PM, Stephen Boyd wrote:
> Quoting Alexandru Elisei (2020-06-17 04:38:47)
>> From: Julien Thierry <julien.thierry@arm.com>
>>
>> The PMU is disabled and enabled, and the counters are programmed from
>> contexts where interrupts or preemption is disabled.
>>
>> The functions to toggle the PMU and to program the PMU counters access the
>> registers directly and don't access data modified by the interrupt handler.
>> That, and the fact that they're always called from non-preemptible
>> contexts, means that we don't need to disable interrupts or use a spinlock.
> Maybe we should add a lockdep assertion that the code isn't preemptible?
> I.e. add a cant_sleep() call? Or is it more that we don't need locking
> because we're just doing register accesses and don't need to protect
> those accesses from each other?

It's both. The spinlocks were there to protect the functions from being preempted
and possibly migrated to another CPU, and from being interrupted by the PMU irq
handler.

There was no data race with the interrupt handler, but before the previous patch
("arm64: perf: Avoid PMXEV* indirection"), in order to read/write/program a
counter, one had to write the counter number to a counter selection register, and
then write/read the desired value from another register. This was done from both
the armv8pmu_{enable,disable}_event() functions and the irq handler, and the
spinlock was necessary. Now that we can access a counter using a single register
access, there's no need to protect the functions from being interrupted by the IRQ
handler. As for armv8pmu_{start,stop}(), they consist of one register write, so
it's also safe for the irq handler to interrupt them.

For the preemption part of the locking. The armv8pmu_{enable,disable}_event(),
when called by the perf core code via the pmu->{start,stop,add,del} callbacks, are
guaranteed to be called with IRQs and preemption disabled, as per the comment in
include/linux/perf_event.h. They are also called from the arm_pmu driver by the
CPU PM notifiers, which should also be executed with interrupts disabled. Should
we check here that the top level code respects these guarantees?

The armv8pmu_{start,stop}() functions are called from the irq handler, so we're
safe from preemption in this case. They are also called via
pmu->pmu_{enable,disable} callbacks, and I didn't find an explicit contract
regarding preemption in include/linux/perf_event.h. I've checked the other call
sites, and I didn't find any instances where they are called with preemption
enabled, which makes sense as we don't want to disable the PMU on another CPU by
accident.

I would be inclined to add cant_sleep() calls to armv8pmu_{start,stop}(). In the
previous iteration, there were WARN_ONs in these functions, and Will said [1] they
can be removed because they are per-CPU operations. Will, what do you think about
adding the lockdep assertions?

[1] https://www.spinics.net/lists/arm-kernel/msg745161.html
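
To make that concrete, what I have in mind is a one-line assertion on top of this
series, something like this sketch:

static void armv8pmu_start(struct arm_pmu *cpu_pmu)
{
        /* Proposed assertion: we must not be preemptible (and migratable) here */
        cant_sleep();

        /* Enable all counters */
        armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
}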

Thanks,
Alex

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 3/7] arm64: perf: Remove PMU locking
@ 2020-06-18 10:51       ` Alexandru Elisei
  0 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-18 10:51 UTC (permalink / raw)
  To: Stephen Boyd, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, Peter Zijlstra, maz, Jiri Olsa,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, will

Hi,

On 6/17/20 9:17 PM, Stephen Boyd wrote:
> Quoting Alexandru Elisei (2020-06-17 04:38:47)
>> From: Julien Thierry <julien.thierry@arm.com>
>>
>> The PMU is disabled and enabled, and the counters are programmed from
>> contexts where interrupts or preemption is disabled.
>>
>> The functions to toggle the PMU and to program the PMU counters access the
>> registers directly and don't access data modified by the interrupt handler.
>> That, and the fact that they're always called from non-preemptible
>> contexts, means that we don't need to disable interrupts or use a spinlock.
> Maybe we should add a lockdep assertion that the code isn't preemptible?
> I.e. add a cant_sleep() call? Or is it more that we don't need locking
> because we're just doing register accesses and don't need to protect
> those accesses from each other?

It's both. The spinlocks were there to protect the functions from being preempted
and possibly migrated to another CPU, and from being interrupted by the PMU irq
handler.

There was no data race with the interrupt handler, but before the previous patch
("arm64: perf: Avoid PMXEV* indirection"), in order to read/write/program a
counter, one had to write the counter number to a counter selection register, and
then write/read the desired value from another register. This was done from both
the armv8pmu_{enable,disable}_event() functions and the irq handler, and the
spinlock was necessary. Now that we can access a counter using a single register
access, there's no need to protect the functions from being interrupted by the IRQ
handler. As for armv8pmu_{start,stop}(), they consist of one register write, so
it's also safe for the irq handler to interrupt them.

For the preemption part of the locking. The armv8pmu_{enable,disable}_event(),
when called by the perf core code via the pmu->{start,stop,add,del} callbacks, are
guaranteed to be called with IRQs and preemption disabled, as per the comment in
include/linux/perf_event.h. They are also called from the arm_pmu driver by the
CPU PM notifiers, which should also be executed with interrupts disabled. Should
we check here that the top level code respects these guarantees?

The armv8pmu_{start,stop}() functions are called from the irq handler, so we're
safe from preemption in this case. They are also called via
pmu->pmu_{enable,disable} callbacks, and I didn't find an explicit contract
regarding preemption in include/linux/perf_event.h. I've checked the other call
sites, and I didn't find any instances where they are called with preemption
enabled, which makes sense as we don't want to disable the PMU on another CPU by
accident.

I would be inclined to add cant_sleep() calls to armv8pmu_{start,stop}(). In the
previous iteration, there were WARN_ONs in these functions, and Will said [1] they
can be removed because they are per-CPU operations. Will, what do you think about
adding the lockdep assertions?

[1] https://www.spinics.net/lists/arm-kernel/msg745161.html

Thanks,
Alex

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 2/7] arm64: perf: Avoid PMXEV* indirection
  2020-06-18 10:51       ` Alexandru Elisei
@ 2020-06-19  8:26         ` Stephen Boyd
  -1 siblings, 0 replies; 45+ messages in thread
From: Stephen Boyd @ 2020-06-19  8:26 UTC (permalink / raw)
  To: Alexandru Elisei, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, Peter Zijlstra, maz, Jiri Olsa,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, will, Julien Thierry

Quoting Alexandru Elisei (2020-06-18 03:51:08)
> On 6/17/20 9:11 PM, Stephen Boyd wrote:
> > Quoting Alexandru Elisei (2020-06-17 04:38:46)
> >> From: Mark Rutland <mark.rutland@arm.com>
> >> @@ -433,8 +494,9 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
> >>  
> >>  static inline void armv8pmu_write_evcntr(int idx, u64 value)
> >>  {
> >> -       armv8pmu_select_counter(idx);
> >> -       write_sysreg(value, pmxevcntr_el0);
> >> +       u32 counter = ARMV8_IDX_TO_COUNTER(idx);
> > Might be a good idea to make ARMV8_IDX_TO_COUNTER a static inline
> > function that has a return type of u32. I had to go check the code to
> > make sure it wasn't something larger.
> 
> Architecturally, there are at most 32 counter registers, which would fit in an s8,
> so I don't think type checking would really help us here.

Ok. It would have saved me a few seconds while reading the code, but I
guess if I hold the architecture in my head I'll be ok too.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 2/7] arm64: perf: Avoid PMXEV* indirection
@ 2020-06-19  8:26         ` Stephen Boyd
  0 siblings, 0 replies; 45+ messages in thread
From: Stephen Boyd @ 2020-06-19  8:26 UTC (permalink / raw)
  To: Alexandru Elisei, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, will, Julien Thierry, Peter Zijlstra, maz,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, Jiri Olsa,
	Julien Thierry

Quoting Alexandru Elisei (2020-06-18 03:51:08)
> On 6/17/20 9:11 PM, Stephen Boyd wrote:
> > Quoting Alexandru Elisei (2020-06-17 04:38:46)
> >> From: Mark Rutland <mark.rutland@arm.com>
> >> @@ -433,8 +494,9 @@ static u64 armv8pmu_read_counter(struct perf_event *event)
> >>  
> >>  static inline void armv8pmu_write_evcntr(int idx, u64 value)
> >>  {
> >> -       armv8pmu_select_counter(idx);
> >> -       write_sysreg(value, pmxevcntr_el0);
> >> +       u32 counter = ARMV8_IDX_TO_COUNTER(idx);
> > Might be a good idea to make ARMV8_IDX_TO_COUNTER a static inline
> > function that has a return type of u32. I had to go check the code to
> > make sure it wasn't something larger.
> 
> Architecturally, there are at most 32 counter registers, which would fit in an s8,
> so I don't think type checking would really help us here.

Ok. It would have saved me a few seconds while reading the code, but I
guess if I hold the architecture in my head I'll be ok too.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 3/7] arm64: perf: Remove PMU locking
  2020-06-18 10:51       ` Alexandru Elisei
@ 2020-06-19  8:29         ` Stephen Boyd
  -1 siblings, 0 replies; 45+ messages in thread
From: Stephen Boyd @ 2020-06-19  8:29 UTC (permalink / raw)
  To: Alexandru Elisei, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, will, Julien Thierry, Peter Zijlstra, maz,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, Jiri Olsa

Quoting Alexandru Elisei (2020-06-18 03:51:31)
> Hi,
> 
> On 6/17/20 9:17 PM, Stephen Boyd wrote:
> > Quoting Alexandru Elisei (2020-06-17 04:38:47)
> >> From: Julien Thierry <julien.thierry@arm.com>
> >>
> >> The PMU is disabled and enabled, and the counters are programmed from
> >> contexts where interrupts or preemption is disabled.
> >>
> >> The functions to toggle the PMU and to program the PMU counters access the
> >> registers directly and don't access data modified by the interrupt handler.
> >> That, and the fact that they're always called from non-preemptible
> >> contexts, means that we don't need to disable interrupts or use a spinlock.
> > Maybe we should add a lockdep assertion that the code isn't preemptible?
> > I.e. add a cant_sleep() call? Or is it more that we don't need locking
> > because we're just doing register accesses and don't need to protect
> > those accesses from each other?
> 
> It's both. The spinlocks were there to protect the functions from being preempted
> and possibly migrated to another CPU, and from being interrupted by the PMU irq
> handler.
> 
> There was no data race with the interrupt handler, but before the previous patch
> ("arm64: perf: Avoid PMXEV* indirection"), in order to read/write/program a
> counter, one had to write the counter number to a counter selection register, and
> then write/read the desired value from another register. This was done from both
> the armv8pmu_{enable,disable}_event() functions and the irq handler, and the
> spinlock was necessary. Now that we can access a counter using a single register
> access, there's no need to protect the functions from being interrupted by the IRQ
> handler. As for armv8pmu_{start,stop}(), they consist of one register write, so
> it's also safe for the irq handler to interrupt them.
> 
> For the preemption part of the locking. The armv8pmu_{enable,disable}_event(),
> when called by the perf core code via the pmu->{start,stop,add,del} callbacks, are
> guaranteed to be called with IRQs and preemption disabled, as per the comment in
> include/linux/perf_event.h. They are also called from the arm_pmu driver by the
> CPU PM notifiers, which should also be executed with interrupts disabled. Should
> we check here that the top level code respects these guarantees?
> 
> The armv8pmu_{start,stop}() functions are called from the irq handler, so we're
> safe from preemption in this case. They are also called via
> pmu->pmu_{enable,disable} callbacks, and I didn't find an explicit contract
> regarding preemption in include/linux/perf_event.h. I've checked the other call
> sites, and I didn't find any instances where they are called with preemption
> enabled, which makes sense as we don't want to disable the PMU on another CPU by
> accident.

If they're all callbacks then it's overkill to add this. Presumably it
is better to enforce this wherever the callbacks are called from so as
to not litter the callee with random cant_sleep() calls. Probably best
to ignore my suggestion.

> 
> I would be inclined to add cant_sleep() calls to armv8pmu_{start,stop}(). In the
> previous iteration, there were WARN_ONs in these functions, and Will said [1] they
> can be removed because they are per-CPU operations. Will, what do you think about
> adding the lockdep assertions?
> 
> [1] https://www.spinics.net/lists/arm-kernel/msg745161.html
> 

If I read it correctly Will is saying the same thing in that thread.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 3/7] arm64: perf: Remove PMU locking
@ 2020-06-19  8:29         ` Stephen Boyd
  0 siblings, 0 replies; 45+ messages in thread
From: Stephen Boyd @ 2020-06-19  8:29 UTC (permalink / raw)
  To: Alexandru Elisei, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, Peter Zijlstra, maz, Jiri Olsa,
	Will Deacon, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Ingo Molnar, catalin.marinas, Namhyung Kim, will

Quoting Alexandru Elisei (2020-06-18 03:51:31)
> Hi,
> 
> On 6/17/20 9:17 PM, Stephen Boyd wrote:
> > Quoting Alexandru Elisei (2020-06-17 04:38:47)
> >> From: Julien Thierry <julien.thierry@arm.com>
> >>
> >> The PMU is disabled and enabled, and the counters are programmed from
> >> contexts where interrupts or preemption is disabled.
> >>
> >> The functions to toggle the PMU and to program the PMU counters access the
> >> registers directly and don't access data modified by the interrupt handler.
> >> That, and the fact that they're always called from non-preemptible
> >> contexts, means that we don't need to disable interrupts or use a spinlock.
> > Maybe we should add a lockdep assertion that the code isn't preemptible?
> > I.e. add a cant_sleep() call? Or is it more that we don't need locking
> > because we're just doing register accesses and don't need to protect
> > those accesses from each other?
> 
> It's both. The spinlocks were there to protect the functions from being preempted
> and possibly migrated to another CPU, and from being interrupted by the PMU irq
> handler.
> 
> There was no data race with the interrupt handler, but before the previous patch
> ("arm64: perf: Avoid PMXEV* indirection"), in order to read/write/program a
> counter, one had to write the counter number to a counter selection register, and
> then write/read the desired value from another register. This was done from both
> the armv8pmu_{enable,disable}_event() functions and the irq handler, and the
> spinlock was necessary. Now that we can access a counter using a single register
> access, there's no need to protect the functions from being interrupted by the IRQ
> handler. As for armv8pmu_{start,stop}(), they consist of one register write, so
> it's also safe for the irq handler to interrupt them.
> 
> For the preemption part of the locking. The armv8pmu_{enable,disable}_event(),
> when called by the perf core code via the pmu->{start,stop,add,del} callbacks, are
> guaranteed to be called with IRQs and preemption disabled, as per the comment in
> include/linux/perf_event.h. They are also called from the arm_pmu driver by the
> CPU PM notifiers, which should also be executed with interrupts disabled. Should
> we check here that the top level code respects these guarantees?
> 
> The armv8pmu_{start,stop}() functions are called from the irq handler, so we're
> safe from preemption in this case. They are also called via
> pmu->pmu_{enable,disable} callbacks, and I didn't find an explicit contract
> regarding preemption in include/linux/perf_event.h. I've checked the other call
> sites, and I didn't find any instances where they are called with preemption
> enabled, which makes sense as we don't want to disable the PMU on another CPU by
> accident.

If they're all callbacks then it's overkill to add this. Presumably it
is better to enforce this wherever the callbacks are called from so as
to not litter the callee with random cant_sleep() calls. Probably best
to ignore my suggestion.

> 
> I would be inclined to add cant_sleep() calls to armv8pmu_{start,stop}(). In the
> previous iteration, there were WARN_ONs in these functions, and Will said [1] they
> can be removed because they are per-CPU operations. Will, what do you think about
> adding the lockdep assertions?
> 
> [1] https://www.spinics.net/lists/arm-kernel/msg745161.html
> 

If I read it correctly Will is saying the same thing in that thread.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 6/7] arm_pmu: Introduce pmu_irq_ops
  2020-06-18 10:51       ` Alexandru Elisei
@ 2020-06-19  8:33         ` Stephen Boyd
  -1 siblings, 0 replies; 45+ messages in thread
From: Stephen Boyd @ 2020-06-19  8:33 UTC (permalink / raw)
  To: Alexandru Elisei, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, maz, Will Deacon, catalin.marinas,
	will, Julien Thierry

Quoting Alexandru Elisei (2020-06-18 03:51:24)
> Hi,
> 
> On 6/17/20 9:23 PM, Stephen Boyd wrote:
> > Quoting Alexandru Elisei (2020-06-17 04:38:50)
> >> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> >> index df352b334ea7..17e5952d21e4 100644
> >> --- a/drivers/perf/arm_pmu.c
> >> +++ b/drivers/perf/arm_pmu.c
> >> @@ -26,8 +26,46 @@
> >>  
> >>  #include <asm/irq_regs.h>
> >>  
> >> +static int armpmu_count_irq_users(const int irq);
> >> +
> >> +struct pmu_irq_ops {
> >> +       void (*enable_pmuirq)(unsigned int irq);
> >> +       void (*disable_pmuirq)(unsigned int irq);
> >> +       void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
> > Does 'cpu' need to be signed?
> 
> I'm not sure what you mean. The cpu argument comes from
> drivers/perf/arm_pmu_platform.c::armpmu_free_irqs -> armpmu_free_irq, where it is the
> iterator variable used by the macro for_each_cpu. The documentation for the macro
> states:
> 
> /**
> * for_each_cpu - iterate over every cpu in a mask
> * @cpu: the (optionally unsigned) integer iterator ^^^^^^^^^^^^^^^^^^^
> 
> I could write a patch to convert to an unsigned int, but it seems like unnecessary
> churn to me.

Ok. It would be nice to make it unsigned in the arm_pmu_platform.c file.
Not required for this patch series.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 6/7] arm_pmu: Introduce pmu_irq_ops
@ 2020-06-19  8:33         ` Stephen Boyd
  0 siblings, 0 replies; 45+ messages in thread
From: Stephen Boyd @ 2020-06-19  8:33 UTC (permalink / raw)
  To: Alexandru Elisei, linux-arm-kernel, linux-kernel
  Cc: mark.rutland, Julien Thierry, maz, Will Deacon, catalin.marinas,
	will, Julien Thierry

Quoting Alexandru Elisei (2020-06-18 03:51:24)
> Hi,
> 
> On 6/17/20 9:23 PM, Stephen Boyd wrote:
> > Quoting Alexandru Elisei (2020-06-17 04:38:50)
> >> diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
> >> index df352b334ea7..17e5952d21e4 100644
> >> --- a/drivers/perf/arm_pmu.c
> >> +++ b/drivers/perf/arm_pmu.c
> >> @@ -26,8 +26,46 @@
> >>  
> >>  #include <asm/irq_regs.h>
> >>  
> >> +static int armpmu_count_irq_users(const int irq);
> >> +
> >> +struct pmu_irq_ops {
> >> +       void (*enable_pmuirq)(unsigned int irq);
> >> +       void (*disable_pmuirq)(unsigned int irq);
> >> +       void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
> > Does 'cpu' need to be signed?
> 
> I'm not sure what you mean. The cpu argument comes from
> drivers/perf/arm_pmu_platform.c::armpmu_free_irqs -> armpmu_free_irq, where it is the
> iterator variable used by the macro for_each_cpu. The documentation for the macro
> states:
> 
> /**
> * for_each_cpu - iterate over every cpu in a mask
> * @cpu: the (optionally unsigned) integer iterator ^^^^^^^^^^^^^^^^^^^
> 
> I could write a patch to convert to an unsigned int, but it seems like unnecessary
> churn to me.

Ok. It would be nice to make it unsigned in the arm_pmu_platform.c file.
Not required for this patch series.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK
  2020-06-17 11:38   ` Alexandru Elisei
  (?)
@ 2020-06-22 14:19   ` Mark Rutland
  2020-06-23  9:47     ` Alexandru Elisei
  -1 siblings, 1 reply; 45+ messages in thread
From: Mark Rutland @ 2020-06-22 14:19 UTC (permalink / raw)
  To: Alexandru Elisei
  Cc: linux-arm-kernel, linux-kernel, maz, will, catalin.marinas,
	Julien Thierry, Julien Thierry, Will Deacon, Peter Zijlstra,
	Ingo Molnar, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim

On Wed, Jun 17, 2020 at 12:38:48PM +0100, Alexandru Elisei wrote:
> From: Julien Thierry <julien.thierry@arm.com>
> 
> perf_event_overflow() can queue an irq_work on the current PE, which is
> executed via an IPI. Move the processing of the irq_work from the PMU IRQ
> handler to the IPI handler, which gets executed immediately afterwards.
> 
> This also makes the IRQ handler NMI safe, because it removes the call to
> irq_work_run().

It wasn't entirely clear to me what the situation was today, and why
this was sound. How about the following to spell that out more
explicitly:

| When handling events, armv8pmu_handle_irq() calls
| perf_event_overflow(), and subsequently calls irq_work_run() to handle
| any work queued by perf_event_overflow(). As perf_event_overflow()
| raises IPI_IRQ_WORK when queuing the work, this isn't strictly
| necessary and the work could be handled as part of the IPI_IRQ_WORK
| handler.
|
| In the common case the IPI handler will run immediately after the PMU
| IRQ handler, and where the PE is heavily loaded with interrupts other
| handlers may run first, widening the window where some counters are
| disabled.
|
| In practice this window is unlikely to be a significant issue, and
| removing the call to irq_work_run() would make the PMU IRQ handler NMI
| safe in addition to making it simpler, so let's do that.
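
For reference, perf_event_overflow() queues its work with irq_work_queue(), and the
arm64 hook for that raises the IPI; roughly (simplified from
arch/arm64/kernel/smp.c):

#ifdef CONFIG_IRQ_WORK
void arch_irq_work_raise(void)
{
        /* irq_work queued on this CPU is signalled via IPI_IRQ_WORK */
        smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
}
#endif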

Thanks,
Mark.

> 
> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> [Reworded commit]
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> ---
>  arch/arm64/kernel/perf_event.c | 14 +++++---------
>  1 file changed, 5 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index a6195022be7d..cf1d92030790 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -750,20 +750,16 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
>  		if (!armpmu_event_set_period(event))
>  			continue;
>  
> +		/*
> +		 * Perf event overflow will queue the processing of the event as
> +		 * an irq_work which will be taken care of in the handling of
> +		 * IPI_IRQ_WORK.
> +		 */
>  		if (perf_event_overflow(event, &data, regs))
>  			cpu_pmu->disable(event);
>  	}
>  	armv8pmu_start(cpu_pmu);
>  
> -	/*
> -	 * Handle the pending perf events.
> -	 *
> -	 * Note: this call *must* be run with interrupts disabled. For
> -	 * platforms that can have the PMU interrupts raised as an NMI, this
> -	 * will not work.
> -	 */
> -	irq_work_run();
> -
>  	return IRQ_HANDLED;
>  }
>  
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK
  2020-06-22 14:19   ` Mark Rutland
@ 2020-06-23  9:47     ` Alexandru Elisei
  0 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-23  9:47 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, maz, will, catalin.marinas,
	Julien Thierry, Julien Thierry, Will Deacon, Peter Zijlstra,
	Ingo Molnar, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim

Hi Mark,

On 6/22/20 3:19 PM, Mark Rutland wrote:
> On Wed, Jun 17, 2020 at 12:38:48PM +0100, Alexandru Elisei wrote:
>> From: Julien Thierry <julien.thierry@arm.com>
>>
>> perf_event_overflow() can queue an irq_work on the current PE, which is
>> executed via an IPI. Move the processing of the irq_work from the PMU IRQ
>> handler to the IPI handler, which gets executed immediately afterwards.
>>
>> This also makes the IRQ handler NMI safe, because it removes the call to
>> irq_work_run().
> It wasn't entirely clear to me what the situation was today, and why
> this was sound. How about the following to spell that out more
> explicitly:
>
> | When handling events, armv8pmu_handle_irq() calls
> | perf_event_overflow(), and subsequently calls irq_work_run() to handle
> | any work queued by perf_event_overflow(). As perf_event_overflow()
> | raises IPI_IRQ_WORK when queuing the work, this isn't strictly
> | necessary and the work could be handled as part of the IPI_IRQ_WORK
> | handler.
> |
> | In the common case the IPI handler will run immediately after the PMU
> | IRQ handler, and where the PE is heavily loaded with interrupts other
> | handlers may run first, widening the window where some counters are
> | disabled.
> |
> | In practice this window is unlikely to be a significant issue, and
> | removing the call to irq_work_run() would make the PMU IRQ handler NMI
> | safe in addition to making it simpler, so let's do that.
>
> Thanks,
> Mark.

That is much better than my commit message; I will definitely update it with your
suggestion.

Thanks,
Alex
>
>> Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
>> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
>> Cc: Jiri Olsa <jolsa@redhat.com>
>> Cc: Namhyung Kim <namhyung@kernel.org>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> [Reworded commit]
>> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
>> ---
>>  arch/arm64/kernel/perf_event.c | 14 +++++---------
>>  1 file changed, 5 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
>> index a6195022be7d..cf1d92030790 100644
>> --- a/arch/arm64/kernel/perf_event.c
>> +++ b/arch/arm64/kernel/perf_event.c
>> @@ -750,20 +750,16 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
>>  		if (!armpmu_event_set_period(event))
>>  			continue;
>>  
>> +		/*
>> +		 * Perf event overflow will queue the processing of the event as
>> +		 * an irq_work which will be taken care of in the handling of
>> +		 * IPI_IRQ_WORK.
>> +		 */
>>  		if (perf_event_overflow(event, &data, regs))
>>  			cpu_pmu->disable(event);
>>  	}
>>  	armv8pmu_start(cpu_pmu);
>>  
>> -	/*
>> -	 * Handle the pending perf events.
>> -	 *
>> -	 * Note: this call *must* be run with interrupts disabled. For
>> -	 * platforms that can have the PMU interrupts raised as an NMI, this
>> -	 * will not work.
>> -	 */
>> -	irq_work_run();
>> -
>>  	return IRQ_HANDLED;
>>  }
>>  
>> -- 
>> 2.27.0
>>

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 0/7] arm_pmu: Use NMI for perf interrupt
  2020-06-17 11:38 ` Alexandru Elisei
@ 2020-06-25 15:11   ` Alexandru Elisei
  -1 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-25 15:11 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel; +Cc: mark.rutland, maz, will, catalin.marinas

Hi,

On 6/17/20 12:38 PM, Alexandru Elisei wrote:
> The series makes the arm_pmu driver use NMIs for the perf interrupt when
> NMIs are available on the platform (currently, only arm64 + GICv3). To make
> it easier to play with the patches, I've pushed a branch at [1]:
>
> $ git clone -b pmu-nmi-v5 git://linux-arm.org/linux-ae

For people who wanted to test the series, but couldn't because their firmware was
clearing SCR_EL3.FIQ, I pushed some patches [1] to fix that. Now pseudo-NMIs work
on all GICv3 security + SCR_EL3.FIQ combinations.

[1] https://lists.infradead.org/pipermail/linux-arm-kernel/2020-June/580143.html

Thanks,
Alex

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 0/7] arm_pmu: Use NMI for perf interrupt
@ 2020-06-25 15:11   ` Alexandru Elisei
  0 siblings, 0 replies; 45+ messages in thread
From: Alexandru Elisei @ 2020-06-25 15:11 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel; +Cc: mark.rutland, maz, will, catalin.marinas

Hi,

On 6/17/20 12:38 PM, Alexandru Elisei wrote:
> The series makes the arm_pmu driver use NMIs for the perf interrupt when
> NMIs are available on the platform (currently, only arm64 + GICv3). To make
> it easier to play with the patches, I've pushed a branch at [1]:
>
> $ git clone -b pmu-nmi-v5 git://linux-arm.org/linux-ae

For people who wanted to test the series, but couldn't because their firmware was
clearing SCR_EL3.FIQ, I pushed some patches [1] to fix that. Now pseudo-NMIs work
on all GICv3 security + SCR_EL3.FIQ combinations.

[1] https://lists.infradead.org/pipermail/linux-arm-kernel/2020-June/580143.html

Thanks,
Alex

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 3/7] arm64: perf: Remove PMU locking
  2020-06-19  8:29         ` Stephen Boyd
@ 2020-07-03 13:45           ` Will Deacon
  -1 siblings, 0 replies; 45+ messages in thread
From: Will Deacon @ 2020-07-03 13:45 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Alexandru Elisei, linux-arm-kernel, linux-kernel, mark.rutland,
	Julien Thierry, Peter Zijlstra, maz, Will Deacon,
	Arnaldo Carvalho de Melo, Alexander Shishkin, Ingo Molnar,
	catalin.marinas, Namhyung Kim, Jiri Olsa

On Fri, Jun 19, 2020 at 01:29:59AM -0700, Stephen Boyd wrote:
> Quoting Alexandru Elisei (2020-06-18 03:51:31)
> > The armv8pmu_{start,stop}() functions are called from the irq handler, so we're
> > safe from preemption in this case. They are also called via
> > pmu->pmu_{enable,disable} callbacks, and I didn't find an explicit contract
> > regarding preemption in include/linux/perf_event.h. I've checked the other call
> > sites, and I didn't find any instances where they are called with preemption
> > enabled, which makes sense as we don't want to disable the PMU on another CPU by
> > accident.
> 
> If they're all callbacks then it's overkill to add this. Presumably it
> is better to enforce this wherever the callbacks are called from so as
> to not litter the callee with random cant_sleep() calls. Probably best
> to ignore my suggestion.
> 
> > 
> > I would be inclined to add cant_sleep() calls to armv8pmu_{start,stop}(). In the
> > previous iteration, there were WARN_ONs in these functions, and Will said [1] they
> > can be removed because they are per-CPU operations. Will, what do you think about
> > adding the lockdep assertions?
> > 
> > [1] https://www.spinics.net/lists/arm-kernel/msg745161.html
> > 
> 
> If I read it correctly Will is saying the same thing in that thread.

Right, in the cases where perf core already relies on things not being
preemptible, we don't need to add extra checks.

Will

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v5 3/7] arm64: perf: Remove PMU locking
@ 2020-07-03 13:45           ` Will Deacon
  0 siblings, 0 replies; 45+ messages in thread
From: Will Deacon @ 2020-07-03 13:45 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: mark.rutland, Julien Thierry, Peter Zijlstra, maz, Jiri Olsa,
	Will Deacon, linux-kernel, Arnaldo Carvalho de Melo,
	Alexander Shishkin, Ingo Molnar, catalin.marinas, Namhyung Kim,
	Alexandru Elisei, linux-arm-kernel

On Fri, Jun 19, 2020 at 01:29:59AM -0700, Stephen Boyd wrote:
> Quoting Alexandru Elisei (2020-06-18 03:51:31)
> > The armv8pmu_{start,stop}() functions are called from the irq handler, so we're
> > safe from preemption in this case. They are also called via
> > pmu->pmu_{enable,disable} callbacks, and I didn't find an explicit contract
> > regarding preemption in include/linux/perf_event.h. I've checked the other call
> > sites, and I didn't find any instances where they are called with preemption
> > enabled, which makes sense as we don't want to disable the PMU on another CPU by
> > accident.
> 
> If they're all callbacks then it's overkill to add this. Presumably it
> is better to enforce this wherever the callbacks are called from so as
> to not litter the callee with random cant_sleep() calls. Probably best
> to ignore my suggestion.
> 
> > 
> > I would be inclined to add cant_sleep() calls to armv8pmu_{start,stop}(). In the
> > previous iteration, there were WARN_ONs in these functions, and Will said [1] they
> > can be removed because they are per-CPU operations. Will, what do you think about
> > adding the lockdep assertions?
> > 
> > [1] https://www.spinics.net/lists/arm-kernel/msg745161.html
> > 
> 
> If I read it correctly Will is saying the same thing in that thread.

Right, in the cases where perf core already relies on things not being
preemptible, we don't need to add extra checks.

Will

^ permalink raw reply	[flat|nested] 45+ messages in thread

end of thread, other threads:[~2020-07-03 13:46 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-17 11:38 [PATCH v5 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
2020-06-17 11:38 ` Alexandru Elisei
2020-06-17 11:38 ` [PATCH v5 1/7] arm64: perf: Add missing ISB in armv8pmu_enable_event() Alexandru Elisei
2020-06-17 11:38   ` Alexandru Elisei
2020-06-17 20:01   ` Stephen Boyd
2020-06-17 20:01     ` Stephen Boyd
2020-06-18 10:50     ` Alexandru Elisei
2020-06-18 10:50       ` Alexandru Elisei
2020-06-17 11:38 ` [PATCH v5 2/7] arm64: perf: Avoid PMXEV* indirection Alexandru Elisei
2020-06-17 11:38   ` Alexandru Elisei
2020-06-17 20:11   ` Stephen Boyd
2020-06-17 20:11     ` Stephen Boyd
2020-06-18 10:51     ` Alexandru Elisei
2020-06-18 10:51       ` Alexandru Elisei
2020-06-19  8:26       ` Stephen Boyd
2020-06-19  8:26         ` Stephen Boyd
2020-06-17 11:38 ` [PATCH v5 3/7] arm64: perf: Remove PMU locking Alexandru Elisei
2020-06-17 11:38   ` Alexandru Elisei
2020-06-17 20:17   ` Stephen Boyd
2020-06-17 20:17     ` Stephen Boyd
2020-06-18 10:51     ` Alexandru Elisei
2020-06-18 10:51       ` Alexandru Elisei
2020-06-19  8:29       ` Stephen Boyd
2020-06-19  8:29         ` Stephen Boyd
2020-07-03 13:45         ` Will Deacon
2020-07-03 13:45           ` Will Deacon
2020-06-17 11:38 ` [PATCH v5 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK Alexandru Elisei
2020-06-17 11:38   ` Alexandru Elisei
2020-06-22 14:19   ` Mark Rutland
2020-06-23  9:47     ` Alexandru Elisei
2020-06-17 11:38 ` [PATCH v5 5/7] arm64: kvm: pmu: Make overflow handler NMI safe Alexandru Elisei
2020-06-17 11:38   ` Alexandru Elisei
2020-06-17 11:38   ` Alexandru Elisei
2020-06-17 11:38 ` [PATCH v5 6/7] arm_pmu: Introduce pmu_irq_ops Alexandru Elisei
2020-06-17 11:38   ` Alexandru Elisei
2020-06-17 20:23   ` Stephen Boyd
2020-06-17 20:23     ` Stephen Boyd
2020-06-18 10:51     ` Alexandru Elisei
2020-06-18 10:51       ` Alexandru Elisei
2020-06-19  8:33       ` Stephen Boyd
2020-06-19  8:33         ` Stephen Boyd
2020-06-17 11:38 ` [PATCH v5 7/7] arm_pmu: arm64: Use NMIs for PMU Alexandru Elisei
2020-06-17 11:38   ` Alexandru Elisei
2020-06-25 15:11 ` [PATCH v5 0/7] arm_pmu: Use NMI for perf interrupt Alexandru Elisei
2020-06-25 15:11   ` Alexandru Elisei
