* [PATCH v1 0/4] powerpc/64s: interrupt speedups
@ 2021-08-16  7:29 Nicholas Piggin
From: Nicholas Piggin @ 2021-08-16  7:29 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

Here are a few stragglers. The first patch was submitted already but had
some bugs with unrecoverable exceptions on HPT (current->blah being
accessed before MSR[RI] was enabled). Those should be fixed now.

The others are generally for helping asynchronous interrupts, which are
a bit harder to measure well but important for IO and IPIs.

After this series, the SPR accesses of the interrupt handlers for radix
are close to optimal, except for PPR, which we could improve on, and
virt CPU accounting, which is very costly -- we might disable the
latter by default unless someone comes up with a good reason to keep it.

Thanks,
Nick

Nicholas Piggin (4):
  powerpc/64: handle MSR EE and RI in interrupt entry wrapper
  powerpc/64s/perf: add power_pmu_running to query whether perf is being
    used
  powerpc/64s/interrupt: Don't enable MSR[EE] in irq handlers unless
    perf is in use
  powerpc/64s/interrupt: avoid saving CFAR in some asynchronous
    interrupts

 arch/powerpc/include/asm/hw_irq.h    | 49 ++++++++++++---
 arch/powerpc/include/asm/interrupt.h | 31 ++++++++--
 arch/powerpc/kernel/dbell.c          |  3 +-
 arch/powerpc/kernel/exceptions-64s.S | 93 +++++++++++++++++++---------
 arch/powerpc/kernel/fpu.S            |  5 ++
 arch/powerpc/kernel/irq.c            |  3 +-
 arch/powerpc/kernel/time.c           | 30 ++++-----
 arch/powerpc/kernel/vector.S         |  8 +++
 arch/powerpc/perf/core-book3s.c      | 13 ++++
 9 files changed, 175 insertions(+), 60 deletions(-)

-- 
2.23.0



* [PATCH v1 1/4] powerpc/64: handle MSR EE and RI in interrupt entry wrapper
From: Nicholas Piggin @ 2021-08-16  7:29 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

Similarly to the system call change in the previous patch, the mtmsrd to
enable RI can be combined with the mtmsrd to enable EE for interrupts
which enable the latter, which tend to be the important synchronous
interrupts (e.g., page faults).

Do this by enabling EE and RI together at the beginning of the entry
wrapper if PACA_IRQ_HARD_DIS is clear, and just enabling RI if it is set
(which means something wanted EE=0).

Asynchronous interrupts set PACA_IRQ_HARD_DIS, but synchronous ones
leave it unchanged, so by default they always get EE=1 unless they
interrupt a context that has hard-disabled interrupts. When the sync
interrupt later calls interrupt_cond_local_irq_enable(), no further
mtmsrd is required because EE was already enabled here.

64e is conceptually unchanged, but it also sets MSR[EE]=1 now in the
interrupt wrapper for synchronous interrupts with the same code.

This saves one mtmsrd L=1 for synchronous interrupts on 64s, worth
about 20 cycles. For kernel-mode interrupts, both synchronous and
asynchronous, it saves an additional ~40 cycles because the mtmsrd is
moved ahead of the mfspr SPRN_AMR, which avoids an SPR scoreboard
stall (on POWER9).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/interrupt.h | 31 ++++++++++++++++++++++++----
 arch/powerpc/kernel/exceptions-64s.S | 30 ---------------------------
 arch/powerpc/kernel/fpu.S            |  5 +++++
 arch/powerpc/kernel/vector.S         |  8 +++++++
 4 files changed, 40 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 6b800d3e2681..e3228a911b35 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -148,9 +148,21 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
 #endif
 
 #ifdef CONFIG_PPC64
-	if (irq_soft_mask_set_return(IRQS_ALL_DISABLED) == IRQS_ENABLED)
+	bool trace_enable = false;
+
+	if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS)) {
+		if (irq_soft_mask_set_return(IRQS_DISABLED) == IRQS_ENABLED)
+			trace_enable = true;
+	} else {
+		irq_soft_mask_set(IRQS_DISABLED);
+	}
+	/* If the interrupt was taken with HARD_DIS set, don't enable MSR[EE] */
+	if (local_paca->irq_happened & PACA_IRQ_HARD_DIS)
+		__hard_RI_enable();
+	else
+		__hard_irq_enable();
+	if (trace_enable)
 		trace_hardirqs_off();
-	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
 
 	if (user_mode(regs)) {
 		CT_WARN_ON(ct_state() != CONTEXT_USER);
@@ -200,13 +212,20 @@ static inline void interrupt_exit_prepare(struct pt_regs *regs, struct interrupt
 
 static inline void interrupt_async_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
 {
+#ifdef CONFIG_PPC64
+	/* Ensure interrupt_enter_prepare does not enable MSR[EE] */
+	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+#endif
+	interrupt_enter_prepare(regs, state);
 #ifdef CONFIG_PPC_BOOK3S_64
+	/*
+	 * MSR[RI] is only enabled after interrupt_enter_prepare, so this
+	 * thread flags access has to come afterward.
+	 */
 	if (cpu_has_feature(CPU_FTR_CTRL) &&
 	    !test_thread_local_flags(_TLF_RUNLATCH))
 		__ppc64_runlatch_on();
 #endif
-
-	interrupt_enter_prepare(regs, state);
 	irq_enter();
 }
 
@@ -273,6 +292,8 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
 	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
 		BUG_ON(!arch_irq_disabled_regs(regs) && !(regs->msr & MSR_EE));
 
+	__hard_RI_enable();
+
 	/* Don't do any per-CPU operations until interrupt state is fixed */
 
 	if (nmi_disables_ftrace(regs)) {
@@ -370,6 +391,8 @@ interrupt_handler long func(struct pt_regs *regs)			\
 {									\
 	long ret;							\
 									\
+	__hard_RI_enable();						\
+									\
 	ret = ____##func (regs);					\
 									\
 	return ret;							\
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 4aec59a77d4c..69a472c38f62 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -113,7 +113,6 @@ name:
 #define IISIDE		.L_IISIDE_\name\()	/* Uses SRR0/1 not DAR/DSISR */
 #define IDAR		.L_IDAR_\name\()	/* Uses DAR (or SRR0) */
 #define IDSISR		.L_IDSISR_\name\()	/* Uses DSISR (or SRR1) */
-#define ISET_RI		.L_ISET_RI_\name\()	/* Run common code w/ MSR[RI]=1 */
 #define IBRANCH_TO_COMMON	.L_IBRANCH_TO_COMMON_\name\() /* ENTRY branch to common */
 #define IREALMODE_COMMON	.L_IREALMODE_COMMON_\name\() /* Common runs in realmode */
 #define IMASK		.L_IMASK_\name\()	/* IRQ soft-mask bit */
@@ -157,9 +156,6 @@ do_define_int n
 	.ifndef IDSISR
 		IDSISR=0
 	.endif
-	.ifndef ISET_RI
-		ISET_RI=1
-	.endif
 	.ifndef IBRANCH_TO_COMMON
 		IBRANCH_TO_COMMON=1
 	.endif
@@ -512,11 +508,6 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real)
 	stb	r10,PACASRR_VALID(r13)
 	.endif
 
-	.if ISET_RI
-	li	r10,MSR_RI
-	mtmsrd	r10,1			/* Set MSR_RI */
-	.endif
-
 	.if ISTACK
 	.if IKUAP
 	kuap_save_amr_and_lock r9, r10, cr1, cr0
@@ -901,11 +892,6 @@ INT_DEFINE_BEGIN(system_reset)
 	IVEC=0x100
 	IAREA=PACA_EXNMI
 	IVIRT=0 /* no virt entry point */
-	/*
-	 * MSR_RI is not enabled, because PACA_EXNMI and nmi stack is
-	 * being used, so a nested NMI exception would corrupt it.
-	 */
-	ISET_RI=0
 	ISTACK=0
 	IKVM_REAL=1
 INT_DEFINE_END(system_reset)
@@ -986,8 +972,6 @@ EXC_COMMON_BEGIN(system_reset_common)
 	lhz	r10,PACA_IN_NMI(r13)
 	addi	r10,r10,1
 	sth	r10,PACA_IN_NMI(r13)
-	li	r10,MSR_RI
-	mtmsrd 	r10,1
 
 	mr	r10,r1
 	ld	r1,PACA_NMI_EMERG_SP(r13)
@@ -1061,12 +1045,6 @@ INT_DEFINE_BEGIN(machine_check_early)
 	IAREA=PACA_EXMC
 	IVIRT=0 /* no virt entry point */
 	IREALMODE_COMMON=1
-	/*
-	 * MSR_RI is not enabled, because PACA_EXMC is being used, so a
-	 * nested machine check corrupts it. machine_check_common enables
-	 * MSR_RI.
-	 */
-	ISET_RI=0
 	ISTACK=0
 	IDAR=1
 	IDSISR=1
@@ -1077,7 +1055,6 @@ INT_DEFINE_BEGIN(machine_check)
 	IVEC=0x200
 	IAREA=PACA_EXMC
 	IVIRT=0 /* no virt entry point */
-	ISET_RI=0
 	IDAR=1
 	IDSISR=1
 	IKVM_REAL=1
@@ -1147,9 +1124,6 @@ EXC_COMMON_BEGIN(machine_check_early_common)
 BEGIN_FTR_SECTION
 	bl	enable_machine_check
 END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
-	li	r10,MSR_RI
-	mtmsrd	r10,1
-
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	machine_check_early
 	std	r3,RESULT(r1)	/* Save result */
@@ -1237,10 +1211,6 @@ EXC_COMMON_BEGIN(machine_check_common)
 	 * save area: PACA_EXMC instead of PACA_EXGEN.
 	 */
 	GEN_COMMON machine_check
-
-	/* Enable MSR_RI when finished with PACA_EXMC */
-	li	r10,MSR_RI
-	mtmsrd 	r10,1
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	machine_check_exception
 	b	interrupt_return_srr
diff --git a/arch/powerpc/kernel/fpu.S b/arch/powerpc/kernel/fpu.S
index 6010adcee16e..eabd578cb772 100644
--- a/arch/powerpc/kernel/fpu.S
+++ b/arch/powerpc/kernel/fpu.S
@@ -81,7 +81,12 @@ EXPORT_SYMBOL(store_fp_state)
  */
 _GLOBAL(load_up_fpu)
 	mfmsr	r5
+#ifdef CONFIG_PPC_BOOK3S_64
+	/* interrupt doesn't set MSR[RI] and HPT can fault on current access */
+	ori	r5,r5,MSR_FP|MSR_RI
+#else
 	ori	r5,r5,MSR_FP
+#endif
 #ifdef CONFIG_VSX
 BEGIN_FTR_SECTION
 	oris	r5,r5,MSR_VSX@h
diff --git a/arch/powerpc/kernel/vector.S b/arch/powerpc/kernel/vector.S
index fc120fac1910..ead2900d9bb0 100644
--- a/arch/powerpc/kernel/vector.S
+++ b/arch/powerpc/kernel/vector.S
@@ -47,6 +47,10 @@ EXPORT_SYMBOL(store_vr_state)
  */
 _GLOBAL(load_up_altivec)
 	mfmsr	r5			/* grab the current MSR */
+#ifdef CONFIG_PPC_BOOK3S_64
+	/* interrupt doesn't set MSR[RI] and HPT can fault on current access */
+	ori	r5,r5,MSR_RI
+#endif
 	oris	r5,r5,MSR_VEC@h
 	MTMSRD(r5)			/* enable use of AltiVec now */
 	isync
@@ -128,6 +132,10 @@ _GLOBAL(load_up_vsx)
 	andis.	r5,r12,MSR_VEC@h
 	beql+	load_up_altivec		/* skip if already loaded */
 
+	/* interrupt doesn't set MSR[RI] and HPT can fault on current access */
+	li	r5,MSR_RI
+	mtmsrd	r5,1
+
 	ld	r4,PACACURRENT(r13)
 	addi	r4,r4,THREAD		/* Get THREAD */
 	li	r6,1
-- 
2.23.0



* [PATCH v1 2/4] powerpc/64s/perf: add power_pmu_running to query whether perf is being used
From: Nicholas Piggin @ 2021-08-16  7:29 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Athira Rajeev, Madhavan Srinivasan, Nicholas Piggin

Interrupt handling code would like to know whether perf is enabled, so
it can decide whether to enable MSR[EE] to improve PMI coverage.

Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/hw_irq.h |  2 ++
 arch/powerpc/perf/core-book3s.c   | 13 +++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 21cc571ea9c2..2d5c0d3ccbb6 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -306,6 +306,8 @@ static inline bool lazy_irq_pending_nocheck(void)
 	return __lazy_irq_pending(local_paca->irq_happened);
 }
 
+bool power_pmu_running(void);
+
 /*
  * This is called by asynchronous interrupts to conditionally
  * re-enable hard interrupts after having cleared the source
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index bb0ee716de91..76114a9afb2b 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -2380,6 +2380,19 @@ static void perf_event_interrupt(struct pt_regs *regs)
 	perf_sample_event_took(sched_clock() - start_clock);
 }
 
+bool power_pmu_running(void)
+{
+	struct cpu_hw_events *cpuhw;
+
+	/* Could this simply test local_paca->pmcregs_in_use? */
+
+	if (!ppmu)
+		return false;
+
+	cpuhw = this_cpu_ptr(&cpu_hw_events);
+	return cpuhw->n_events;
+}
+
 static int power_pmu_prepare_cpu(unsigned int cpu)
 {
 	struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu);
-- 
2.23.0



* [PATCH v1 3/4] powerpc/64s/interrupt: Don't enable MSR[EE] in irq handlers unless perf is in use
From: Nicholas Piggin @ 2021-08-16  7:29 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Athira Rajeev, Madhavan Srinivasan, Nicholas Piggin

Enabling MSR[EE] in interrupt handlers while interrupts are still soft
masked allows PMIs to profile interrupt handlers to some degree, beyond
what SIAR latching allows.

When perf is not being used, this is almost useless work. It requires an
extra mtmsrd in the irq handler, and it also opens the door to masked
interrupts hitting and requiring replay, which is more expensive than
just taking them directly. This effect can be noticeable in high IRQ
workloads.

Avoid enabling MSR[EE] unless perf is currently in use. This saves about
60 cycles (or 8%) on a simple decrementer interrupt microbenchmark.
Replayed interrupts drop from 1.4% of interrupts to 0.003%.

This does prevent the soft-NMI interrupt from being taken in these
handlers, but that was not too reliable anyway. The SMP watchdog will
continue to be the reliable way to catch lockups.

Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/hw_irq.h | 47 +++++++++++++++++++++++++------
 arch/powerpc/kernel/dbell.c       |  3 +-
 arch/powerpc/kernel/irq.c         |  3 +-
 arch/powerpc/kernel/time.c        | 30 ++++++++++----------
 4 files changed, 57 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 2d5c0d3ccbb6..e6644509c7af 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -309,17 +309,46 @@ static inline bool lazy_irq_pending_nocheck(void)
 bool power_pmu_running(void);
 
 /*
- * This is called by asynchronous interrupts to conditionally
- * re-enable hard interrupts after having cleared the source
- * of the interrupt. They are kept disabled if there is a different
- * soft-masked interrupt pending that requires hard masking.
+ * This is called by asynchronous interrupts to check whether to
+ * conditionally re-enable hard interrupts after having cleared
+ * the source of the interrupt. They are kept disabled if there
+ * is a different soft-masked interrupt pending that requires hard
+ * masking.
  */
-static inline void may_hard_irq_enable(void)
+static inline bool may_hard_irq_enable(void)
 {
-	if (!(get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK)) {
-		get_paca()->irq_happened &= ~PACA_IRQ_HARD_DIS;
-		__hard_irq_enable();
-	}
+#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
+	BUG_ON(mfmsr() & MSR_EE);
+#endif
+#ifdef CONFIG_PERF_EVENTS
+	if (!power_pmu_running())
+		return false;
+
+	if (get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK)
+		return false;
+
+	return true;
+#else
+	return false;
+#endif
+}
+
+/*
+ * Do the hard enabling, only call this if may_hard_irq_enable is true.
+ */
+static inline void do_hard_irq_enable(void)
+{
+#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
+	WARN_ON(irq_soft_mask_return() != IRQS_ALL_DISABLED);
+	WARN_ON(get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK);
+	BUG_ON(mfmsr() & MSR_EE);
+#endif
+	/*
+	 * This allows PMI interrupts (and watchdog soft-NMIs) through.
+	 * There is no other reason to enable this way.
+	 */
+	get_paca()->irq_happened &= ~PACA_IRQ_HARD_DIS;
+	__hard_irq_enable();
 }
 
 static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
diff --git a/arch/powerpc/kernel/dbell.c b/arch/powerpc/kernel/dbell.c
index 5545c9cd17c1..0edeb5e9fede 100644
--- a/arch/powerpc/kernel/dbell.c
+++ b/arch/powerpc/kernel/dbell.c
@@ -27,7 +27,8 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(doorbell_exception)
 
 	ppc_msgsync();
 
-	may_hard_irq_enable();
+	if (may_hard_irq_enable())
+		do_hard_irq_enable();
 
 	kvmppc_clear_host_ipi(smp_processor_id());
 	__this_cpu_inc(irq_stat.doorbell_irqs);
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 551b653228c4..745becbcd1ad 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -739,7 +739,8 @@ void __do_irq(struct pt_regs *regs)
 	irq = ppc_md.get_irq();
 
 	/* We can hard enable interrupts now to allow perf interrupts */
-	may_hard_irq_enable();
+	if (may_hard_irq_enable())
+		do_hard_irq_enable();
 
 	/* And finally process it */
 	if (unlikely(!irq))
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index c487ba5a6e11..ac67ec57f129 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -567,22 +567,22 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt)
 		return;
 	}
 
-	/* Ensure a positive value is written to the decrementer, or else
-	 * some CPUs will continue to take decrementer exceptions. When the
-	 * PPC_WATCHDOG (decrementer based) is configured, keep this at most
-	 * 31 bits, which is about 4 seconds on most systems, which gives
-	 * the watchdog a chance of catching timer interrupt hard lockups.
-	 */
-	if (IS_ENABLED(CONFIG_PPC_WATCHDOG))
-		set_dec(0x7fffffff);
-	else
-		set_dec(decrementer_max);
-
-	/* Conditionally hard-enable interrupts now that the DEC has been
-	 * bumped to its maximum value
-	 */
-	may_hard_irq_enable();
+	/* Conditionally hard-enable interrupts. */
+	if (may_hard_irq_enable()) {
+		/* Ensure a positive value is written to the decrementer, or
+		 * else some CPUs will continue to take decrementer exceptions.
+		 * When the PPC_WATCHDOG (decrementer based) is configured,
+		 * keep this at most 31 bits, which is about 4 seconds on most
+		 * systems, which gives the watchdog a chance of catching timer
+		 * interrupt hard lockups.
+		 */
+		if (IS_ENABLED(CONFIG_PPC_WATCHDOG))
+			set_dec(0x7fffffff);
+		else
+			set_dec(decrementer_max);
 
+		do_hard_irq_enable();
+	}
 
 #if defined(CONFIG_PPC32) && defined(CONFIG_PPC_PMAC)
 	if (atomic_read(&ppc_n_lost_interrupts) != 0)
-- 
2.23.0



* [PATCH v1 4/4] powerpc/64s/interrupt: avoid saving CFAR in some asynchronous interrupts
From: Nicholas Piggin @ 2021-08-16  7:29 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

Reading the CFAR register is quite costly (~20 cycles on POWER9). It is
a good idea to save it for most synchronous interrupts, but for
asynchronous ones it is much less important.

Doorbell, external, and decrementer interrupts are the important
asynchronous ones. HV interrupts can't skip CFAR if KVM HV is possible,
because the interrupt might be a guest exit that requires CFAR to be
preserved. But for now the important pseries interrupts can avoid
loading CFAR.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 63 ++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 69a472c38f62..42badd7beaf0 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -111,6 +111,8 @@ name:
 #define IAREA		.L_IAREA_\name\()	/* PACA save area */
 #define IVIRT		.L_IVIRT_\name\()	/* Has virt mode entry point */
 #define IISIDE		.L_IISIDE_\name\()	/* Uses SRR0/1 not DAR/DSISR */
+#define ICFAR		.L_ICFAR_\name\()	/* Uses CFAR */
+#define ICFAR_IF_HVMODE	.L_ICFAR_IF_HVMODE_\name\() /* Uses CFAR if HV */
 #define IDAR		.L_IDAR_\name\()	/* Uses DAR (or SRR0) */
 #define IDSISR		.L_IDSISR_\name\()	/* Uses DSISR (or SRR1) */
 #define IBRANCH_TO_COMMON	.L_IBRANCH_TO_COMMON_\name\() /* ENTRY branch to common */
@@ -150,6 +152,12 @@ do_define_int n
 	.ifndef IISIDE
 		IISIDE=0
 	.endif
+	.ifndef ICFAR
+		ICFAR=1
+	.endif
+	.ifndef ICFAR_IF_HVMODE
+		ICFAR_IF_HVMODE=0
+	.endif
 	.ifndef IDAR
 		IDAR=0
 	.endif
@@ -287,9 +295,21 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	HMT_MEDIUM
 	std	r10,IAREA+EX_R10(r13)		/* save r10 - r12 */
+	.if ICFAR
 BEGIN_FTR_SECTION
 	mfspr	r10,SPRN_CFAR
 END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
+	.elseif ICFAR_IF_HVMODE
+BEGIN_FTR_SECTION
+  BEGIN_FTR_SECTION_NESTED(69)
+	mfspr	r10,SPRN_CFAR
+  END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 69)
+FTR_SECTION_ELSE
+  BEGIN_FTR_SECTION_NESTED(69)
+	li	r10,0
+  END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 69)
+ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+	.endif
 	.if \ool
 	.if !\virt
 	b	tramp_real_\name
@@ -305,9 +325,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 BEGIN_FTR_SECTION
 	std	r9,IAREA+EX_PPR(r13)
 END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
+	.if ICFAR || ICFAR_IF_HVMODE
 BEGIN_FTR_SECTION
 	std	r10,IAREA+EX_CFAR(r13)
 END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
+	.endif
 	INTERRUPT_TO_KERNEL
 	mfctr	r10
 	std	r10,IAREA+EX_CTR(r13)
@@ -559,7 +581,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	.endif
 
 BEGIN_FTR_SECTION
+	.if ICFAR || ICFAR_IF_HVMODE
 	ld	r10,IAREA+EX_CFAR(r13)
+	.else
+	li	r10,0
+	.endif
 	std	r10,ORIG_GPR3(r1)
 END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 	ld	r10,IAREA+EX_CTR(r13)
@@ -1501,6 +1527,12 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
  *
  * If soft masked, the masked handler will note the pending interrupt for
  * replay, and clear MSR[EE] in the interrupted context.
+ *
+ * CFAR is not required because this is an asynchronous interrupt that in
+ * general won't have much bearing on the state of the CPU, with the possible
+ * exception of crash/debug IPIs, but those are generally moving to use SRESET
+ * IPIs. Unless this is an HV interrupt and KVM HV is possible, in which case
+ * it may be exiting the guest and need CFAR to be saved.
  */
 INT_DEFINE_BEGIN(hardware_interrupt)
 	IVEC=0x500
@@ -1508,6 +1540,10 @@ INT_DEFINE_BEGIN(hardware_interrupt)
 	IMASK=IRQS_DISABLED
 	IKVM_REAL=1
 	IKVM_VIRT=1
+	ICFAR=0
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+	ICFAR_IF_HVMODE=1
+#endif
 INT_DEFINE_END(hardware_interrupt)
 
 EXC_REAL_BEGIN(hardware_interrupt, 0x500, 0x100)
@@ -1726,6 +1762,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
  * If PPC_WATCHDOG is configured, the soft masked handler will actually set
  * things back up to run soft_nmi_interrupt as a regular interrupt handler
  * on the emergency stack.
+ *
+ * CFAR is not required because this is asynchronous (see hardware_interrupt).
+ * A watchdog interrupt may like to have CFAR, but usually the interesting
+ * branch is long gone by that point (e.g., infinite loop).
  */
 INT_DEFINE_BEGIN(decrementer)
 	IVEC=0x900
@@ -1733,6 +1773,7 @@ INT_DEFINE_BEGIN(decrementer)
 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 	IKVM_REAL=1
 #endif
+	ICFAR=0
 INT_DEFINE_END(decrementer)
 
 EXC_REAL_BEGIN(decrementer, 0x900, 0x80)
@@ -1808,6 +1849,8 @@ EXC_COMMON_BEGIN(hdecrementer_common)
  * If soft masked, the masked handler will note the pending interrupt for
  * replay, leaving MSR[EE] enabled in the interrupted context because the
  * doorbells are edge triggered.
+ *
+ * CFAR is not required, similarly to hardware_interrupt.
  */
 INT_DEFINE_BEGIN(doorbell_super)
 	IVEC=0xa00
@@ -1815,6 +1858,7 @@ INT_DEFINE_BEGIN(doorbell_super)
 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 	IKVM_REAL=1
 #endif
+	ICFAR=0
 INT_DEFINE_END(doorbell_super)
 
 EXC_REAL_BEGIN(doorbell_super, 0xa00, 0x100)
@@ -1866,6 +1910,7 @@ INT_DEFINE_BEGIN(system_call)
 	IVEC=0xc00
 	IKVM_REAL=1
 	IKVM_VIRT=1
+	ICFAR=0
 INT_DEFINE_END(system_call)
 
 .macro SYSTEM_CALL virt
@@ -2164,6 +2209,11 @@ EXC_COMMON_BEGIN(hmi_exception_common)
  * Interrupt 0xe80 - Directed Hypervisor Doorbell Interrupt.
  * This is an asynchronous interrupt in response to a msgsnd doorbell.
  * Similar to the 0xa00 doorbell but for host rather than guest.
+ *
+ * CFAR is not required (similar to doorbell_interrupt), unless KVM HV
+ * is enabled, in which case it may be a guest exit. Most PowerNV kernels
+ * include KVM support so it would be nice if this could be dynamically
+ * patched out if KVM was not currently running any guests.
  */
 INT_DEFINE_BEGIN(h_doorbell)
 	IVEC=0xe80
@@ -2171,6 +2221,9 @@ INT_DEFINE_BEGIN(h_doorbell)
 	IMASK=IRQS_DISABLED
 	IKVM_REAL=1
 	IKVM_VIRT=1
+#ifndef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+	ICFAR=0
+#endif
 INT_DEFINE_END(h_doorbell)
 
 EXC_REAL_BEGIN(h_doorbell, 0xe80, 0x20)
@@ -2194,6 +2247,9 @@ EXC_COMMON_BEGIN(h_doorbell_common)
  * Interrupt 0xea0 - Hypervisor Virtualization Interrupt.
  * This is an asynchronous interrupt in response to an "external exception".
  * Similar to 0x500 but for host only.
+ *
+ * Like h_doorbell, CFAR is only required for KVM HV because this can be
+ * a guest exit.
  */
 INT_DEFINE_BEGIN(h_virt_irq)
 	IVEC=0xea0
@@ -2201,6 +2257,9 @@ INT_DEFINE_BEGIN(h_virt_irq)
 	IMASK=IRQS_DISABLED
 	IKVM_REAL=1
 	IKVM_VIRT=1
+#ifndef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+	ICFAR=0
+#endif
 INT_DEFINE_END(h_virt_irq)
 
 EXC_REAL_BEGIN(h_virt_irq, 0xea0, 0x20)
@@ -2237,6 +2296,8 @@ EXC_VIRT_NONE(0x4ee0, 0x20)
  *
  * If soft masked, the masked handler will note the pending interrupt for
  * replay, and clear MSR[EE] in the interrupted context.
+ *
+ * CFAR is not used by perf interrupts so not required.
  */
 INT_DEFINE_BEGIN(performance_monitor)
 	IVEC=0xf00
@@ -2244,6 +2305,7 @@ INT_DEFINE_BEGIN(performance_monitor)
 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 	IKVM_REAL=1
 #endif
+	ICFAR=0
 INT_DEFINE_END(performance_monitor)
 
 EXC_REAL_BEGIN(performance_monitor, 0xf00, 0x20)
@@ -2668,6 +2730,7 @@ EXC_VIRT_NONE(0x5800, 0x100)
 INT_DEFINE_BEGIN(soft_nmi)
 	IVEC=0x900
 	ISTACK=0
+	ICFAR=0
 INT_DEFINE_END(soft_nmi)
 
 /*
-- 
2.23.0



* Re: [PATCH v1 2/4] powerpc/64s/perf: add power_pmu_running to query whether perf is being used
From: kernel test robot @ 2021-08-16 18:19 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Athira Rajeev, kbuild-all, Madhavan Srinivasan, Nicholas Piggin


Hi Nicholas,

I love your patch! Yet something to improve:

[auto build test ERROR on powerpc/next]
[also build test ERROR on v5.14-rc6 next-20210816]
[cannot apply to scottwood/next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Nicholas-Piggin/powerpc-64s-interrupt-speedups/20210816-153424
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-wii_defconfig (attached as .config)
compiler: powerpc-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/98b8a35aa718c93ace9df66df6274fe392633f80
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Nicholas-Piggin/powerpc-64s-interrupt-speedups/20210816-153424
        git checkout 98b8a35aa718c93ace9df66df6274fe392633f80
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> arch/powerpc/perf/core-book3s.c:2392:6: error: no previous prototype for 'power_pmu_running' [-Werror=missing-prototypes]
    2392 | bool power_pmu_running(void)
         |      ^~~~~~~~~~~~~~~~~
   cc1: all warnings being treated as errors


vim +/power_pmu_running +2392 arch/powerpc/perf/core-book3s.c

  2391	
> 2392	bool power_pmu_running(void)
  2393	{
  2394		struct cpu_hw_events *cpuhw;
  2395	
  2396		/* Could this simply test local_paca->pmcregs_in_use? */
  2397	
  2398		if (!ppmu)
  2399			return false;
  2400	
  2401		cpuhw = this_cpu_ptr(&cpu_hw_events);
  2402		return cpuhw->n_events;
  2403	}
  2404	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
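
A likely cause of the error above (an inference, not confirmed in this
thread): the new declaration was added next to lazy_irq_pending_nocheck()
inside hw_irq.h's CONFIG_PPC64-only region, so 32-bit configs such as
wii_defconfig build the definition in core-book3s.c without ever seeing
a prototype. A minimal fix might be along these lines (a sketch only):

/* arch/powerpc/include/asm/hw_irq.h -- hypothetical placement */
#ifdef CONFIG_PPC64
/* ... 64-bit-only helpers such as lazy_irq_pending_nocheck() ... */
#endif /* CONFIG_PPC64 */

/* Declared unconditionally so 32-bit core-book3s.c sees a prototype. */
bool power_pmu_running(void);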



* Re: [PATCH v1 2/4] powerpc/64s/perf: add power_pmu_running to query whether perf is being used
From: Madhavan Srinivasan @ 2021-08-17 13:06 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Athira Rajeev


On 8/16/21 12:59 PM, Nicholas Piggin wrote:
> Interrupt handling code would like to know whether perf is enabled, to
> know whether it should enable MSR[EE] to improve PMI coverage.
>
> Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/include/asm/hw_irq.h |  2 ++
>   arch/powerpc/perf/core-book3s.c   | 13 +++++++++++++
>   2 files changed, 15 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> index 21cc571ea9c2..2d5c0d3ccbb6 100644
> --- a/arch/powerpc/include/asm/hw_irq.h
> +++ b/arch/powerpc/include/asm/hw_irq.h
> @@ -306,6 +306,8 @@ static inline bool lazy_irq_pending_nocheck(void)
>   	return __lazy_irq_pending(local_paca->irq_happened);
>   }
>   
> +bool power_pmu_running(void);
> +
>   /*
>    * This is called by asynchronous interrupts to conditionally
>    * re-enable hard interrupts after having cleared the source
> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
> index bb0ee716de91..76114a9afb2b 100644
> --- a/arch/powerpc/perf/core-book3s.c
> +++ b/arch/powerpc/perf/core-book3s.c
> @@ -2380,6 +2380,19 @@ static void perf_event_interrupt(struct pt_regs *regs)
>   	perf_sample_event_took(sched_clock() - start_clock);
>   }
>   
> +bool power_pmu_running(void)
> +{
> +	struct cpu_hw_events *cpuhw;
> +
> +	/* Could this simply test local_paca->pmcregs_in_use? */
> +
> +	if (!ppmu)
> +		return false;


This covers only the case where the perf platform driver is not
registered, but we should also check for MMCR0[32], since the PMU
SPRs can be accessed via sysfs.

Maddy
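
For illustration, the extra check might look something like this (a
sketch only; it assumes MMCR0_FC is the right mnemonic for MMCR0[32],
the freeze-counters bit, and that unfrozen counters should count as
"running"):

	if (!ppmu)
		return false;

	/*
	 * Sketch: counters not frozen (MMCR0[FC] clear) implies someone,
	 * e.g. a sysfs SPR access, has the PMU in use even with no perf
	 * events scheduled.
	 */
	if (!(mfspr(SPRN_MMCR0) & MMCR0_FC))
		return true;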


> +
> +	cpuhw = this_cpu_ptr(&cpu_hw_events);
> +	return cpuhw->n_events;
> +}
> +
>   static int power_pmu_prepare_cpu(unsigned int cpu)
>   {
>   	struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu);


* Re: [PATCH v1 2/4] powerpc/64s/perf: add power_pmu_running to query whether perf is being used
From: Michael Ellerman @ 2021-08-17 13:42 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Athira Rajeev, Madhavan Srinivasan, Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:

> Interrupt handling code would like to know whether perf is enabled, to
> know whether it should enable MSR[EE] to improve PMI coverage.
>
> Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/include/asm/hw_irq.h |  2 ++
>  arch/powerpc/perf/core-book3s.c   | 13 +++++++++++++
>  2 files changed, 15 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> index 21cc571ea9c2..2d5c0d3ccbb6 100644
> --- a/arch/powerpc/include/asm/hw_irq.h
> +++ b/arch/powerpc/include/asm/hw_irq.h
> @@ -306,6 +306,8 @@ static inline bool lazy_irq_pending_nocheck(void)
>  	return __lazy_irq_pending(local_paca->irq_happened);
>  }
>  
> +bool power_pmu_running(void);
> +
>  /*
>   * This is called by asynchronous interrupts to conditionally
>   * re-enable hard interrupts after having cleared the source
> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
> index bb0ee716de91..76114a9afb2b 100644
> --- a/arch/powerpc/perf/core-book3s.c
> +++ b/arch/powerpc/perf/core-book3s.c
> @@ -2380,6 +2380,19 @@ static void perf_event_interrupt(struct pt_regs *regs)
>  	perf_sample_event_took(sched_clock() - start_clock);
>  }
>  
> +bool power_pmu_running(void)
> +{
> +	struct cpu_hw_events *cpuhw;
> +
> +	/* Could this simply test local_paca->pmcregs_in_use? */

I think it could, except that it's under an ifdef:

#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
	u8 pmcregs_in_use;		/* pseries puts this in lppaca */
#endif

cheers
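
For illustration, a variant that honours that ifdef might look like the
following (a sketch only; it assumes reading the local paca field
directly is appropriate here, even though pseries keeps the
authoritative copy in the lppaca, as the comment notes):

bool power_pmu_running(void)
{
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
	/* Sketch: the flag is only compiled in under this config */
	return local_paca->pmcregs_in_use;
#else
	struct cpu_hw_events *cpuhw;

	if (!ppmu)
		return false;

	cpuhw = this_cpu_ptr(&cpu_hw_events);
	return cpuhw->n_events;
#endif
}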


* Re: [PATCH v1 2/4] powerpc/64s/perf: add power_pmu_running to query whether perf is being used
From: Nicholas Piggin @ 2021-08-18 11:41 UTC (permalink / raw)
  To: linuxppc-dev, Madhavan Srinivasan; +Cc: Athira Rajeev

Excerpts from Madhavan Srinivasan's message of August 17, 2021 11:06 pm:
> 
> On 8/16/21 12:59 PM, Nicholas Piggin wrote:
>> Interrupt handling code would like to know whether perf is enabled, to
>> know whether it should enable MSR[EE] to improve PMI coverage.
>>
>> Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
>> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/include/asm/hw_irq.h |  2 ++
>>   arch/powerpc/perf/core-book3s.c   | 13 +++++++++++++
>>   2 files changed, 15 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>> index 21cc571ea9c2..2d5c0d3ccbb6 100644
>> --- a/arch/powerpc/include/asm/hw_irq.h
>> +++ b/arch/powerpc/include/asm/hw_irq.h
>> @@ -306,6 +306,8 @@ static inline bool lazy_irq_pending_nocheck(void)
>>   	return __lazy_irq_pending(local_paca->irq_happened);
>>   }
>>   
>> +bool power_pmu_running(void);
>> +
>>   /*
>>    * This is called by asynchronous interrupts to conditionally
>>    * re-enable hard interrupts after having cleared the source
>> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
>> index bb0ee716de91..76114a9afb2b 100644
>> --- a/arch/powerpc/perf/core-book3s.c
>> +++ b/arch/powerpc/perf/core-book3s.c
>> @@ -2380,6 +2380,19 @@ static void perf_event_interrupt(struct pt_regs *regs)
>>   	perf_sample_event_took(sched_clock() - start_clock);
>>   }
>>   
>> +bool power_pmu_running(void)
>> +{
>> +	struct cpu_hw_events *cpuhw;
>> +
>> +	/* Could this simply test local_paca->pmcregs_in_use? */
>> +
>> +	if (!ppmu)
>> +		return false;
> 
> 
> This covers only when perf platform driver is not registered,
> but we should also check for MMCR0[32], since pmu sprs can be
> accessed via sysfs.

In that case, do they actually do anything with the PMI? Hopefully it
shouldn't matter.

But I do think a lot of this stuff could be cleaned up. We have 
pmcs_enabled and ppc_enable_pmcs() in sysfs.c, ppc_set_pmu_inuse(), 
ppc_md.enable_pmcs(), reserve_pmc_hardware(), etc and different users 
call different things. We don't consistently disable either, e.g., we 
never disable the H_PERFMON facility after we stop using perf even 
though it says that slows down partition switch.

I started to have a look at sorting it out but it looks like a big
job so would take a bit of time if we want to do it.

Thanks,
Nick


* Re: [PATCH v1 2/4] powerpc/64s/perf: add power_pmu_running to query whether perf is being used
From: Athira Rajeev @ 2021-08-18 12:14 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: Madhavan Srinivasan, linuxppc-dev



> On 18-Aug-2021, at 5:11 PM, Nicholas Piggin <npiggin@gmail.com> wrote:
> 
> Excerpts from Madhavan Srinivasan's message of August 17, 2021 11:06 pm:
>> 
>> On 8/16/21 12:59 PM, Nicholas Piggin wrote:
>>> Interrupt handling code would like to know whether perf is enabled, to
>>> know whether it should enable MSR[EE] to improve PMI coverage.
>>> 
>>> Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
>>> Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>>  arch/powerpc/include/asm/hw_irq.h |  2 ++
>>>  arch/powerpc/perf/core-book3s.c   | 13 +++++++++++++
>>>  2 files changed, 15 insertions(+)
>>> 
>>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>>> index 21cc571ea9c2..2d5c0d3ccbb6 100644
>>> --- a/arch/powerpc/include/asm/hw_irq.h
>>> +++ b/arch/powerpc/include/asm/hw_irq.h
>>> @@ -306,6 +306,8 @@ static inline bool lazy_irq_pending_nocheck(void)
>>>  	return __lazy_irq_pending(local_paca->irq_happened);
>>>  }
>>> 
>>> +bool power_pmu_running(void);
>>> +
>>>  /*
>>>   * This is called by asynchronous interrupts to conditionally
>>>   * re-enable hard interrupts after having cleared the source
>>> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
>>> index bb0ee716de91..76114a9afb2b 100644
>>> --- a/arch/powerpc/perf/core-book3s.c
>>> +++ b/arch/powerpc/perf/core-book3s.c
>>> @@ -2380,6 +2380,19 @@ static void perf_event_interrupt(struct pt_regs *regs)
>>>  	perf_sample_event_took(sched_clock() - start_clock);
>>>  }
>>> 
>>> +bool power_pmu_running(void)
>>> +{
>>> +	struct cpu_hw_events *cpuhw;
>>> +
>>> +	/* Could this simply test local_paca->pmcregs_in_use? */
>>> +
>>> +	if (!ppmu)
>>> +		return false;
>> 
>> 
>> This covers only when perf platform driver is not registered,
>> but we should also check for MMCR0[32], since pmu sprs can be
>> accessed via sysfs.
> 
> In that case do they actually do anything with the PMI? I don't think it 
> should matter hopefully.
> 
> But I do think a lot of this stuff could be cleaned up. We have 
> pmcs_enabled and ppc_enable_pmcs() in sysfs.c, ppc_set_pmu_inuse(), 
> ppc_md.enable_pmcs(), reserve_pmc_hardware(), etc and different users 
> call different things. We don't consistently disable either, e.g., we 
> never disable the H_PERFMON facility after we stop using perf even 
> though it says that slows down partition switch.

Hi Nick,

I have started looking into the code paths to understand them, and I am
working on getting the code cleaned up. I will post a patch set for the
cleanup.

Thanks
Athira Rajeev
> 
> I started to have a look at sorting it out but it looks like a big
> job so would take a bit of time if we want to do it.
> 
> Thanks,
> Nick


