* [PATCH v6 00/39] powerpc: interrupt wrappers
@ 2021-01-15 16:49 Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 01/39] KVM: PPC: Book3S HV: Context tracking exit guest context before enabling irqs Nicholas Piggin
` (38 more replies)
0 siblings, 39 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This series adds interrupt handler wrapper functions, similar to the
generic / x86 code, and moves several common operations into them from
asm or from open-coded sequences in the individual handlers.
This series is based on upstream plus "powerpc/64s: fix scv entry
fallback flush vs interrupt".
Since v1:
- Fixed a couple of compile issues
- Fixed perf weirdness (sometimes NMI, sometimes not)
- Also move irq_enter/exit into wrappers
Since v2:
- Rebased upstream
- Took code in patch 3 from Christophe
- Fixed some compile errors from 0day
Since v3:
- Rebased
- Split Christophe's 32s DABR patch into its own patch
- Fixed a missing asm change for 32s in patch 3, noticed by Christophe.
- Moved changes around, split out one more patch (patch 9) to make
changes more logical and atomic.
- Added comments better explaining the _RAW (SLB, HPTE) interrupt handlers
Since v4:
- Rebased (on top of scv fallback flush fix)
- Rearranged a few changes into different patches per Christophe's
suggestions, e.g., moving the ___do_page_fault change from patch 2 to
patch 10. I didn't take every suggestion (e.g., splitting out the
__hash_page change to drop the msr argument before the bulk of patch 2
seemed like churn without much improvement), and some others, like
removing the new ___do_page_fault variant if we can change hash fault
context tracking, I didn't get time to fully investigate and implement.
I don't think this is a showstopper, though; we can make more
improvements as we go.
Since v5:
- Lots of good review suggestions from Christophe, see v5 email threads.
- The major change is that do_break is now left in asm and selected early
as an alternate interrupt handler, which is a smaller step and matches
other subarchs better.
- Rearranged patches, split, moved things, bug fixes, etc.
- Converted a few more missed exception handlers for debug and ras
stuff.
- Fixed a few relatively minor bugs and made comment updates and tidy-ups
that don't strictly have to be part of this series but might as well be.
Thanks,
Nick
Christophe Leroy (1):
powerpc/32s: move DABR match out of handle_page_fault
Nicholas Piggin (38):
KVM: PPC: Book3S HV: Context tracking exit guest context before
enabling irqs
powerpc/64s: move DABR match out of handle_page_fault
powerpc/64s: move the hash fault handling logic to C
powerpc: remove arguments from fault handler functions
powerpc: do_break get registers from regs
powerpc: bad_page_fault get registers from regs
powerpc: rearrange do_page_fault error case to be inside
exception_enter
powerpc/64s: move bad_page_fault handling to C
powerpc/64s: split do_hash_fault
powerpc/mm: Remove stale do_page_fault comment referring to SLB faults
powerpc/64s: slb comment update
powerpc/traps: add NOKPROBE_SYMBOL for sreset and mce
powerpc/perf: move perf irq/nmi handling details into traps.c
powerpc/time: move timer_broadcast_interrupt prototype to asm/time.h
powerpc: add and use unknown_async_exception
powerpc/fsl_booke/32: CacheLockingException remove args
powerpc: DebugException remove args
powerpc/cell: tidy up pervasive declarations
powerpc: introduce die_mce
powerpc/mce: ensure machine check handler always tests RI
powerpc: improve handling of unrecoverable system reset
powerpc: interrupt handler wrapper functions
powerpc: add interrupt wrapper entry / exit stub functions
powerpc: convert interrupt handlers to use wrappers
powerpc: add interrupt_cond_local_irq_enable helper
powerpc/64: context tracking remove _TIF_NOHZ
powerpc/64s/hash: improve context tracking of hash faults
powerpc/64: context tracking move to interrupt wrappers
powerpc/64: add context tracking to asynchronous interrupts
powerpc: handle irq_enter/irq_exit in interrupt handler wrappers
powerpc/64s: move context tracking exit to interrupt exit path
powerpc/64s: reconcile interrupts in C
powerpc/64: move account_stolen_time into its own function
powerpc/64: entry cpu time accounting in C
powerpc: move NMI entry/exit code into wrapper
powerpc/64s: move NMI soft-mask handling to C
powerpc/64s: runlatch interrupt handling in C
powerpc/64s: power4 nap fixup in C
arch/powerpc/Kconfig | 1 -
arch/powerpc/include/asm/asm-prototypes.h | 29 --
arch/powerpc/include/asm/bug.h | 9 +-
arch/powerpc/include/asm/cputime.h | 14 +
arch/powerpc/include/asm/debug.h | 4 -
arch/powerpc/include/asm/hw_irq.h | 9 -
arch/powerpc/include/asm/interrupt.h | 434 +++++++++++++++++++++
arch/powerpc/include/asm/ppc_asm.h | 24 --
arch/powerpc/include/asm/processor.h | 1 +
arch/powerpc/include/asm/thread_info.h | 10 +-
arch/powerpc/include/asm/time.h | 2 +
arch/powerpc/kernel/dbell.c | 9 +-
arch/powerpc/kernel/entry_32.S | 25 +-
arch/powerpc/kernel/exceptions-64e.S | 8 +-
arch/powerpc/kernel/exceptions-64s.S | 310 ++-------------
arch/powerpc/kernel/head_40x.S | 11 +-
arch/powerpc/kernel/head_8xx.S | 11 +-
arch/powerpc/kernel/head_book3s_32.S | 14 +-
arch/powerpc/kernel/head_booke.h | 6 +-
arch/powerpc/kernel/head_fsl_booke.S | 6 +-
arch/powerpc/kernel/idle_book3s.S | 4 +
arch/powerpc/kernel/irq.c | 7 +-
arch/powerpc/kernel/mce.c | 16 +-
arch/powerpc/kernel/process.c | 8 +-
arch/powerpc/kernel/ptrace/ptrace.c | 4 -
arch/powerpc/kernel/signal.c | 4 -
arch/powerpc/kernel/syscall_64.c | 30 +-
arch/powerpc/kernel/tau_6xx.c | 5 +-
arch/powerpc/kernel/time.c | 7 +-
arch/powerpc/kernel/traps.c | 263 +++++++------
arch/powerpc/kernel/watchdog.c | 15 +-
arch/powerpc/kvm/book3s_hv.c | 7 +-
arch/powerpc/kvm/book3s_hv_builtin.c | 1 +
arch/powerpc/kvm/booke.c | 1 +
arch/powerpc/mm/book3s64/hash_utils.c | 96 +++--
arch/powerpc/mm/book3s64/slb.c | 40 +-
arch/powerpc/mm/fault.c | 80 ++--
arch/powerpc/perf/core-book3s.c | 35 +-
arch/powerpc/perf/core-fsl-emb.c | 25 --
arch/powerpc/platforms/8xx/machine_check.c | 2 +-
arch/powerpc/platforms/cell/pervasive.c | 1 +
arch/powerpc/platforms/cell/pervasive.h | 3 -
arch/powerpc/platforms/cell/ras.c | 6 +-
arch/powerpc/platforms/cell/ras.h | 9 +-
arch/powerpc/platforms/powernv/idle.c | 1 +
arch/powerpc/platforms/powernv/opal.c | 2 +-
arch/powerpc/platforms/pseries/ras.c | 2 +-
47 files changed, 867 insertions(+), 744 deletions(-)
create mode 100644 arch/powerpc/include/asm/interrupt.h
--
2.23.0
^ permalink raw reply [flat|nested] 56+ messages in thread
* [PATCH v6 01/39] KVM: PPC: Book3S HV: Context tracking exit guest context before enabling irqs
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-16 10:38 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 02/39] powerpc/32s: move DABR match out of handle_page_fault Nicholas Piggin
` (37 subsequent siblings)
38 siblings, 1 reply; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: kvm-ppc, Nicholas Piggin
Interrupts that occur in kernel mode expect that context tracking
is set to kernel. Enabling local irqs before context tracking
switches from guest to host means interrupts can come in and trigger
warnings about wrong context, and possibly worse.
Cc: kvm-ppc@vger.kernel.org
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 6f612d240392..d348e77cee20 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3407,8 +3407,9 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
kvmppc_set_host_core(pcpu);
+ guest_exit_irqoff();
+
local_irq_enable();
- guest_exit();
/* Let secondaries go back to the offline loop */
for (i = 0; i < controlled_threads; ++i) {
@@ -4217,8 +4218,9 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
kvmppc_set_host_core(pcpu);
+ guest_exit_irqoff();
+
local_irq_enable();
- guest_exit();
cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest);
--
2.23.0
* [PATCH v6 02/39] powerpc/32s: move DABR match out of handle_page_fault
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 01/39] KVM: PPC: Book3S HV: Context tracking exit guest context before enabling irqs Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 03/39] powerpc/64s: " Nicholas Piggin
` (36 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
From: Christophe Leroy <christophe.leroy@csgroup.eu>
handle_page_fault() has some code dedicated to book3s/32 that calls
do_break() when the DSI is a DABR match.
On other platforms, do_break() is handled separately.
Do the same for book3s/32: call do_break() earlier in DSI handling.
This change also avoids doing the test on ISI.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/kernel/entry_32.S | 15 ---------------
arch/powerpc/kernel/head_book3s_32.S | 3 +++
2 files changed, 3 insertions(+), 15 deletions(-)
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 1c9b0ccc2172..238eacfda7b0 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -670,10 +670,6 @@ ppc_swapcontext:
.globl handle_page_fault
handle_page_fault:
addi r3,r1,STACK_FRAME_OVERHEAD
-#ifdef CONFIG_PPC_BOOK3S_32
- andis. r0,r5,DSISR_DABRMATCH@h
- bne- handle_dabr_fault
-#endif
bl do_page_fault
cmpwi r3,0
beq+ ret_from_except
@@ -687,17 +683,6 @@ handle_page_fault:
bl __bad_page_fault
b ret_from_except_full
-#ifdef CONFIG_PPC_BOOK3S_32
- /* We have a data breakpoint exception - handle it */
-handle_dabr_fault:
- SAVE_NVGPRS(r1)
- lwz r0,_TRAP(r1)
- clrrwi r0,r0,1
- stw r0,_TRAP(r1)
- bl do_break
- b ret_from_except_full
-#endif
-
/*
* This routine switches between two different tasks. The process
* state of one is saved on its kernel stack. Then the state
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 349bf3f0c3af..fc9a12768a14 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -680,7 +680,10 @@ handle_page_fault_tramp_1:
lwz r5, _DSISR(r11)
/* fall through */
handle_page_fault_tramp_2:
+ andis. r0, r5, DSISR_DABRMATCH@h
+ bne- 1f
EXC_XFER_LITE(0x300, handle_page_fault)
+1: EXC_XFER_STD(0x300, do_break)
#ifdef CONFIG_VMAP_STACK
.macro save_regs_thread thread
--
2.23.0
* [PATCH v6 03/39] powerpc/64s: move DABR match out of handle_page_fault
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 01/39] KVM: PPC: Book3S HV: Context tracking exit guest context before enabling irqs Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 02/39] powerpc/32s: move DABR match out of handle_page_fault Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 04/39] powerpc/64s: move the hash fault handling logic to C Nicholas Piggin
` (35 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Similar to the 32s change, move the DABR match test and the call to
do_break into the DSI handler.
Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/exceptions-64s.S | 34 +++++++++++++---------------
1 file changed, 16 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 6e53f7638737..a6333b986a57 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1439,6 +1439,8 @@ EXC_COMMON_BEGIN(data_access_common)
GEN_COMMON data_access
ld r4,_DAR(r1)
ld r5,_DSISR(r1)
+ andis. r0,r5,DSISR_DABRMATCH@h
+ bne- 1f
BEGIN_MMU_FTR_SECTION
ld r6,_MSR(r1)
li r3,0x300
@@ -1447,6 +1449,18 @@ MMU_FTR_SECTION_ELSE
b handle_page_fault
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+1: /* We have a data breakpoint exception - handle it */
+ ld r4,_DAR(r1)
+ ld r5,_DSISR(r1)
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ bl do_break
+ /*
+ * do_break() may have changed the NV GPRS while handling a breakpoint.
+ * If so, we need to restore them with their updated values.
+ */
+ REST_NVGPRS(r1)
+ b interrupt_return
+
GEN_KVM data_access
@@ -3228,7 +3242,7 @@ disable_machine_check:
.balign IFETCH_ALIGN_BYTES
do_hash_page:
#ifdef CONFIG_PPC_BOOK3S_64
- lis r0,(DSISR_BAD_FAULT_64S | DSISR_DABRMATCH | DSISR_KEYFAULT)@h
+ lis r0,(DSISR_BAD_FAULT_64S | DSISR_KEYFAULT)@h
ori r0,r0,DSISR_BAD_FAULT_64S@l
and. r0,r5,r0 /* weird error? */
bne- handle_page_fault /* if not, try to insert a HPTE */
@@ -3262,15 +3276,13 @@ do_hash_page:
/* Error */
blt- 13f
- /* Reload DAR/DSISR into r4/r5 for the DABR check below */
+ /* Reload DAR/DSISR into r4/r5 for handle_page_fault */
ld r4,_DAR(r1)
ld r5,_DSISR(r1)
#endif /* CONFIG_PPC_BOOK3S_64 */
/* Here we have a page fault that hash_page can't handle. */
handle_page_fault:
-11: andis. r0,r5,DSISR_DABRMATCH@h
- bne- handle_dabr_fault
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_page_fault
cmpdi r3,0
@@ -3281,20 +3293,6 @@ handle_page_fault:
bl __bad_page_fault
b interrupt_return
-/* We have a data breakpoint exception - handle it */
-handle_dabr_fault:
- ld r4,_DAR(r1)
- ld r5,_DSISR(r1)
- addi r3,r1,STACK_FRAME_OVERHEAD
- bl do_break
- /*
- * do_break() may have changed the NV GPRS while handling a breakpoint.
- * If so, we need to restore them with their updated values.
- */
- REST_NVGPRS(r1)
- b interrupt_return
-
-
#ifdef CONFIG_PPC_BOOK3S_64
/* We have a page fault that hash_page could handle but HV refused
* the PTE insertion
--
2.23.0
* [PATCH v6 04/39] powerpc/64s: move the hash fault handling logic to C
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (2 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 03/39] powerpc/64s: " Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 05/39] powerpc: remove arguments from fault handler functions Nicholas Piggin
` (34 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
The fault handling still has some complex logic, particularly around
hash table handling, implemented in asm. Implement most of this in C.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 1 +
arch/powerpc/kernel/exceptions-64s.S | 127 ++++--------------
arch/powerpc/mm/book3s64/hash_utils.c | 77 +++++++----
3 files changed, 78 insertions(+), 127 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 066b1d34c7bc..60a669379aa0 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -454,6 +454,7 @@ static inline unsigned long hpt_hash(unsigned long vpn,
#define HPTE_NOHPTE_UPDATE 0x2
#define HPTE_USE_KERNEL_KEY 0x4
+int do_hash_fault(struct pt_regs *regs, unsigned long ea, unsigned long dsisr);
extern int __hash_page_4K(unsigned long ea, unsigned long access,
unsigned long vsid, pte_t *ptep, unsigned long trap,
unsigned long flags, int ssize, int subpage_prot);
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index a6333b986a57..07aba8af99d3 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1401,14 +1401,15 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
*
* Handling:
* - Hash MMU
- * Go to do_hash_page first to see if the HPT can be filled from an entry in
- * the Linux page table. Hash faults can hit in kernel mode in a fairly
+ * Go to do_hash_fault, which attempts to fill the HPT from an entry in the
+ * Linux page table. Hash faults can hit in kernel mode in a fairly
* arbitrary state (e.g., interrupts disabled, locks held) when accessing
* "non-bolted" regions, e.g., vmalloc space. However these should always be
- * backed by Linux page tables.
+ * backed by Linux page table entries.
*
- * If none is found, do a Linux page fault. Linux page faults can happen in
- * kernel mode due to user copy operations of course.
+ * If no entry is found the Linux page fault handler is invoked (by
+ * do_hash_fault). Linux page faults can happen in kernel mode due to user
+ * copy operations of course.
*
* KVM: The KVM HDSI handler may perform a load with MSR[DR]=1 in guest
* MMU context, which may cause a DSI in the host, which must go to the
@@ -1439,27 +1440,29 @@ EXC_COMMON_BEGIN(data_access_common)
GEN_COMMON data_access
ld r4,_DAR(r1)
ld r5,_DSISR(r1)
+ addi r3,r1,STACK_FRAME_OVERHEAD
andis. r0,r5,DSISR_DABRMATCH@h
bne- 1f
BEGIN_MMU_FTR_SECTION
- ld r6,_MSR(r1)
- li r3,0x300
- b do_hash_page /* Try to handle as hpte fault */
+ bl do_hash_fault
MMU_FTR_SECTION_ELSE
- b handle_page_fault
+ bl do_page_fault
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+ cmpdi r3,0
+ beq+ interrupt_return
+ mr r5,r3
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ ld r4,_DAR(r1)
+ bl __bad_page_fault
+ b interrupt_return
-1: /* We have a data breakpoint exception - handle it */
- ld r4,_DAR(r1)
- ld r5,_DSISR(r1)
- addi r3,r1,STACK_FRAME_OVERHEAD
- bl do_break
+1: bl do_break
/*
* do_break() may have changed the NV GPRS while handling a breakpoint.
* If so, we need to restore them with their updated values.
*/
REST_NVGPRS(r1)
- b interrupt_return
+ b interrupt_return
GEN_KVM data_access
@@ -1554,13 +1557,19 @@ EXC_COMMON_BEGIN(instruction_access_common)
GEN_COMMON instruction_access
ld r4,_DAR(r1)
ld r5,_DSISR(r1)
+ addi r3,r1,STACK_FRAME_OVERHEAD
BEGIN_MMU_FTR_SECTION
- ld r6,_MSR(r1)
- li r3,0x400
- b do_hash_page /* Try to handle as hpte fault */
+ bl do_hash_fault
MMU_FTR_SECTION_ELSE
- b handle_page_fault
+ bl do_page_fault
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+ cmpdi r3,0
+ beq+ interrupt_return
+ mr r5,r3
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ ld r4,_DAR(r1)
+ bl __bad_page_fault
+ b interrupt_return
GEN_KVM instruction_access
@@ -3235,83 +3244,3 @@ disable_machine_check:
RFI_TO_KERNEL
1: mtlr r0
blr
-
-/*
- * Hash table stuff
- */
- .balign IFETCH_ALIGN_BYTES
-do_hash_page:
-#ifdef CONFIG_PPC_BOOK3S_64
- lis r0,(DSISR_BAD_FAULT_64S | DSISR_KEYFAULT)@h
- ori r0,r0,DSISR_BAD_FAULT_64S@l
- and. r0,r5,r0 /* weird error? */
- bne- handle_page_fault /* if not, try to insert a HPTE */
-
- /*
- * If we are in an "NMI" (e.g., an interrupt when soft-disabled), then
- * don't call hash_page, just fail the fault. This is required to
- * prevent re-entrancy problems in the hash code, namely perf
- * interrupts hitting while something holds H_PAGE_BUSY, and taking a
- * hash fault. See the comment in hash_preload().
- */
- ld r11, PACA_THREAD_INFO(r13)
- lwz r0,TI_PREEMPT(r11)
- andis. r0,r0,NMI_MASK@h
- bne 77f
-
- /*
- * r3 contains the trap number
- * r4 contains the faulting address
- * r5 contains dsisr
- * r6 msr
- *
- * at return r3 = 0 for success, 1 for page fault, negative for error
- */
- bl __hash_page /* build HPTE if possible */
- cmpdi r3,0 /* see if __hash_page succeeded */
-
- /* Success */
- beq interrupt_return /* Return from exception on success */
-
- /* Error */
- blt- 13f
-
- /* Reload DAR/DSISR into r4/r5 for handle_page_fault */
- ld r4,_DAR(r1)
- ld r5,_DSISR(r1)
-#endif /* CONFIG_PPC_BOOK3S_64 */
-
-/* Here we have a page fault that hash_page can't handle. */
-handle_page_fault:
- addi r3,r1,STACK_FRAME_OVERHEAD
- bl do_page_fault
- cmpdi r3,0
- beq+ interrupt_return
- mr r5,r3
- addi r3,r1,STACK_FRAME_OVERHEAD
- ld r4,_DAR(r1)
- bl __bad_page_fault
- b interrupt_return
-
-#ifdef CONFIG_PPC_BOOK3S_64
-/* We have a page fault that hash_page could handle but HV refused
- * the PTE insertion
- */
-13: mr r5,r3
- addi r3,r1,STACK_FRAME_OVERHEAD
- ld r4,_DAR(r1)
- bl low_hash_fault
- b interrupt_return
-#endif
-
-/*
- * We come here as a result of a DSI at a point where we don't want
- * to call hash_page, such as when we are accessing memory (possibly
- * user memory) inside a PMU interrupt that occurred while interrupts
- * were soft-disabled. We want to invoke the exception handler for
- * the access, or panic if there isn't a handler.
- */
-77: addi r3,r1,STACK_FRAME_OVERHEAD
- li r5,SIGSEGV
- bl bad_page_fault
- b interrupt_return
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 73b06adb6eeb..e866cae57e2f 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1512,16 +1512,40 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
}
EXPORT_SYMBOL_GPL(hash_page);
-int __hash_page(unsigned long trap, unsigned long ea, unsigned long dsisr,
- unsigned long msr)
+int do_hash_fault(struct pt_regs *regs, unsigned long ea, unsigned long dsisr)
{
unsigned long access = _PAGE_PRESENT | _PAGE_READ;
unsigned long flags = 0;
- struct mm_struct *mm = current->mm;
- unsigned int region_id = get_region_id(ea);
+ struct mm_struct *mm;
+ unsigned int region_id;
+ int err;
+
+ if (unlikely(dsisr & (DSISR_BAD_FAULT_64S | DSISR_KEYFAULT)))
+ goto page_fault;
+
+ /*
+ * If we are in an "NMI" (e.g., an interrupt when soft-disabled), then
+ * don't call hash_page, just fail the fault. This is required to
+ * prevent re-entrancy problems in the hash code, namely perf
+ * interrupts hitting while something holds H_PAGE_BUSY, and taking a
+ * hash fault. See the comment in hash_preload().
+ *
+ * We come here as a result of a DSI at a point where we don't want
+ * to call hash_page, such as when we are accessing memory (possibly
+ * user memory) inside a PMU interrupt that occurred while interrupts
+ * were soft-disabled. We want to invoke the exception handler for
+ * the access, or panic if there isn't a handler.
+ */
+ if (unlikely(in_nmi())) {
+ bad_page_fault(regs, ea, SIGSEGV);
+ return 0;
+ }
+ region_id = get_region_id(ea);
if ((region_id == VMALLOC_REGION_ID) || (region_id == IO_REGION_ID))
mm = &init_mm;
+ else
+ mm = current->mm;
if (dsisr & DSISR_NOHPTE)
flags |= HPTE_NOHPTE_UPDATE;
@@ -1537,13 +1561,31 @@ int __hash_page(unsigned long trap, unsigned long ea, unsigned long dsisr,
* 2) user space access kernel space.
*/
access |= _PAGE_PRIVILEGED;
- if ((msr & MSR_PR) || (region_id == USER_REGION_ID))
+ if (user_mode(regs) || (region_id == USER_REGION_ID))
access &= ~_PAGE_PRIVILEGED;
- if (trap == 0x400)
+ if (regs->trap == 0x400)
access |= _PAGE_EXEC;
- return hash_page_mm(mm, ea, access, trap, flags);
+ err = hash_page_mm(mm, ea, access, regs->trap, flags);
+ if (unlikely(err < 0)) {
+ // failed to insert a hash PTE due to a hypervisor error
+ if (user_mode(regs)) {
+ if (IS_ENABLED(CONFIG_PPC_SUBPAGE_PROT) && err == -2)
+ _exception(SIGSEGV, regs, SEGV_ACCERR, ea);
+ else
+ _exception(SIGBUS, regs, BUS_ADRERR, ea);
+ } else {
+ bad_page_fault(regs, ea, SIGBUS);
+ }
+ err = 0;
+
+ } else if (err) {
+page_fault:
+ err = do_page_fault(regs, ea, dsisr);
+ }
+
+ return err;
}
#ifdef CONFIG_PPC_MM_SLICES
@@ -1843,27 +1885,6 @@ void flush_hash_range(unsigned long number, int local)
}
}
-/*
- * low_hash_fault is called when we the low level hash code failed
- * to instert a PTE due to an hypervisor error
- */
-void low_hash_fault(struct pt_regs *regs, unsigned long address, int rc)
-{
- enum ctx_state prev_state = exception_enter();
-
- if (user_mode(regs)) {
-#ifdef CONFIG_PPC_SUBPAGE_PROT
- if (rc == -2)
- _exception(SIGSEGV, regs, SEGV_ACCERR, address);
- else
-#endif
- _exception(SIGBUS, regs, BUS_ADRERR, address);
- } else
- bad_page_fault(regs, address, SIGBUS);
-
- exception_exit(prev_state);
-}
-
long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
unsigned long pa, unsigned long rflags,
unsigned long vflags, int psize, int ssize)
--
2.23.0
* [PATCH v6 05/39] powerpc: remove arguments from fault handler functions
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (3 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 04/39] powerpc/64s: move the hash fault handling logic to C Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-27 6:38 ` Christophe Leroy
2021-01-15 16:49 ` [PATCH v6 06/39] powerpc: do_break get registers from regs Nicholas Piggin
` (33 subsequent siblings)
38 siblings, 1 reply; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Make mm fault handlers all just take the pt_regs * argument and load
DAR/DSISR from that. Make those that return a value return long.
This is done to make the function signatures match other handlers, which
will help with a future patch to add wrappers. Explicit arguments could
be added for performance but that would require more wrapper macro
variants.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/asm-prototypes.h | 4 ++--
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 2 +-
arch/powerpc/include/asm/bug.h | 2 +-
arch/powerpc/kernel/entry_32.S | 7 +------
arch/powerpc/kernel/exceptions-64e.S | 2 --
arch/powerpc/kernel/exceptions-64s.S | 17 ++++-------------
arch/powerpc/kernel/head_40x.S | 10 +++++-----
arch/powerpc/kernel/head_8xx.S | 6 +++---
arch/powerpc/kernel/head_book3s_32.S | 5 ++---
arch/powerpc/kernel/head_booke.h | 4 +---
arch/powerpc/mm/book3s64/hash_utils.c | 8 +++++---
arch/powerpc/mm/book3s64/slb.c | 11 +++++++----
arch/powerpc/mm/fault.c | 5 ++---
13 files changed, 34 insertions(+), 49 deletions(-)
diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index d0b832cbbec8..22c9d08fa3a4 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -82,8 +82,8 @@ void kernel_bad_stack(struct pt_regs *regs);
void system_reset_exception(struct pt_regs *regs);
void machine_check_exception(struct pt_regs *regs);
void emulation_assist_interrupt(struct pt_regs *regs);
-long do_slb_fault(struct pt_regs *regs, unsigned long ea);
-void do_bad_slb_fault(struct pt_regs *regs, unsigned long ea, long err);
+long do_slb_fault(struct pt_regs *regs);
+void do_bad_slb_fault(struct pt_regs *regs);
/* signals, syscalls and interrupts */
long sys_swapcontext(struct ucontext __user *old_ctx,
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 60a669379aa0..b9968e297da2 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -454,7 +454,7 @@ static inline unsigned long hpt_hash(unsigned long vpn,
#define HPTE_NOHPTE_UPDATE 0x2
#define HPTE_USE_KERNEL_KEY 0x4
-int do_hash_fault(struct pt_regs *regs, unsigned long ea, unsigned long dsisr);
+long do_hash_fault(struct pt_regs *regs);
extern int __hash_page_4K(unsigned long ea, unsigned long access,
unsigned long vsid, pte_t *ptep, unsigned long trap,
unsigned long flags, int ssize, int subpage_prot);
diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index 464f8ca8a5c9..f7827e993196 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -111,7 +111,7 @@
#ifndef __ASSEMBLY__
struct pt_regs;
-extern int do_page_fault(struct pt_regs *, unsigned long, unsigned long);
+long do_page_fault(struct pt_regs *);
extern void bad_page_fault(struct pt_regs *, unsigned long, int);
void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig);
extern void _exception(int, struct pt_regs *, int, unsigned long);
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 238eacfda7b0..d6ea3f2d6cc0 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -276,8 +276,7 @@ reenable_mmu:
* We save a bunch of GPRs,
* r3 can be different from GPR3(r1) at this point, r9 and r11
* contains the old MSR and handler address respectively,
- * r4 & r5 can contain page fault arguments that need to be passed
- * along as well. r0, r6-r8, r12, CCR, CTR, XER etc... are left
+ * r0, r4-r8, r12, CCR, CTR, XER etc... are left
* clobbered as they aren't useful past this point.
*/
@@ -285,15 +284,11 @@ reenable_mmu:
stw r9,8(r1)
stw r11,12(r1)
stw r3,16(r1)
- stw r4,20(r1)
- stw r5,24(r1)
/* If we are disabling interrupts (normal case), simply log it with
* lockdep
*/
1: bl trace_hardirqs_off
- lwz r5,24(r1)
- lwz r4,20(r1)
lwz r3,16(r1)
lwz r11,12(r1)
lwz r9,8(r1)
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 74d07dc0bb48..43e71d86dcbf 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1011,8 +1011,6 @@ storage_fault_common:
std r14,_DAR(r1)
std r15,_DSISR(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
- mr r4,r14
- mr r5,r15
ld r14,PACA_EXGEN+EX_R14(r13)
ld r15,PACA_EXGEN+EX_R15(r13)
bl do_page_fault
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 07aba8af99d3..839dcb94eea7 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1438,10 +1438,9 @@ EXC_VIRT_BEGIN(data_access, 0x4300, 0x80)
EXC_VIRT_END(data_access, 0x4300, 0x80)
EXC_COMMON_BEGIN(data_access_common)
GEN_COMMON data_access
- ld r4,_DAR(r1)
- ld r5,_DSISR(r1)
+ ld r4,_DSISR(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
- andis. r0,r5,DSISR_DABRMATCH@h
+ andis. r0,r4,DSISR_DABRMATCH@h
bne- 1f
BEGIN_MMU_FTR_SECTION
bl do_hash_fault
@@ -1504,10 +1503,9 @@ EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80)
EXC_VIRT_END(data_access_slb, 0x4380, 0x80)
EXC_COMMON_BEGIN(data_access_slb_common)
GEN_COMMON data_access_slb
- ld r4,_DAR(r1)
- addi r3,r1,STACK_FRAME_OVERHEAD
BEGIN_MMU_FTR_SECTION
/* HPT case, do SLB fault */
+ addi r3,r1,STACK_FRAME_OVERHEAD
bl do_slb_fault
cmpdi r3,0
bne- 1f
@@ -1519,8 +1517,6 @@ MMU_FTR_SECTION_ELSE
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
std r3,RESULT(r1)
RECONCILE_IRQ_STATE(r10, r11)
- ld r4,_DAR(r1)
- ld r5,RESULT(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_bad_slb_fault
b interrupt_return
@@ -1555,8 +1551,6 @@ EXC_VIRT_BEGIN(instruction_access, 0x4400, 0x80)
EXC_VIRT_END(instruction_access, 0x4400, 0x80)
EXC_COMMON_BEGIN(instruction_access_common)
GEN_COMMON instruction_access
- ld r4,_DAR(r1)
- ld r5,_DSISR(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
BEGIN_MMU_FTR_SECTION
bl do_hash_fault
@@ -1602,10 +1596,9 @@ EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80)
EXC_VIRT_END(instruction_access_slb, 0x4480, 0x80)
EXC_COMMON_BEGIN(instruction_access_slb_common)
GEN_COMMON instruction_access_slb
- ld r4,_DAR(r1)
- addi r3,r1,STACK_FRAME_OVERHEAD
BEGIN_MMU_FTR_SECTION
/* HPT case, do SLB fault */
+ addi r3,r1,STACK_FRAME_OVERHEAD
bl do_slb_fault
cmpdi r3,0
bne- 1f
@@ -1617,8 +1610,6 @@ MMU_FTR_SECTION_ELSE
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
std r3,RESULT(r1)
RECONCILE_IRQ_STATE(r10, r11)
- ld r4,_DAR(r1)
- ld r5,RESULT(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_bad_slb_fault
b interrupt_return
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index a1ae00689e0f..3c5577ac4dc8 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -179,9 +179,9 @@ _ENTRY(saved_ksp_limit)
*/
START_EXCEPTION(0x0300, DataStorage)
EXCEPTION_PROLOG
- mfspr r5, SPRN_ESR /* Grab the ESR, save it, pass arg3 */
+ mfspr r5, SPRN_ESR /* Grab the ESR, save it */
stw r5, _ESR(r11)
- mfspr r4, SPRN_DEAR /* Grab the DEAR, save it, pass arg2 */
+ mfspr r4, SPRN_DEAR /* Grab the DEAR, save it */
stw r4, _DEAR(r11)
EXC_XFER_LITE(0x300, handle_page_fault)
@@ -191,9 +191,9 @@ _ENTRY(saved_ksp_limit)
*/
START_EXCEPTION(0x0400, InstructionAccess)
EXCEPTION_PROLOG
- mr r4,r12 /* Pass SRR0 as arg2 */
- stw r4, _DEAR(r11)
- li r5,0 /* Pass zero as arg3 */
+ li r5,0
+ stw r5, _ESR(r11) /* Zero ESR */
+ stw r12, _DEAR(r11) /* SRR0 as DEAR */
EXC_XFER_LITE(0x400, handle_page_fault)
/* 0x0500 - External Interrupt Exception */
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 52702f3db6df..0b2c247cfdff 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -312,14 +312,14 @@ DataStoreTLBMiss:
. = 0x1300
InstructionTLBError:
EXCEPTION_PROLOG
- mr r4,r12
andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
andis. r10,r9,SRR1_ISI_NOPT@h
beq+ .Litlbie
- tlbie r4
+ tlbie r12
/* 0x400 is InstructionAccess exception, needed by bad_page_fault() */
.Litlbie:
- stw r4, _DAR(r11)
+ stw r12, _DAR(r11)
+ stw r5, _DSISR(r11)
EXC_XFER_LITE(0x400, handle_page_fault)
/* This is the data TLB error on the MPC8xx. This could be due to
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index fc9a12768a14..94ad1372c490 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -352,9 +352,9 @@ BEGIN_MMU_FTR_SECTION
bl hash_page
END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
#endif /* CONFIG_VMAP_STACK */
-1: mr r4,r12
andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
- stw r4, _DAR(r11)
+ stw r5, _DSISR(r11)
+ stw r12, _DAR(r11)
EXC_XFER_LITE(0x400, handle_page_fault)
/* External interrupt */
@@ -676,7 +676,6 @@ handle_page_fault_tramp_1:
#ifdef CONFIG_VMAP_STACK
EXCEPTION_PROLOG_2 handle_dar_dsisr=1
#endif
- lwz r4, _DAR(r11)
lwz r5, _DSISR(r11)
/* fall through */
handle_page_fault_tramp_2:
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 74e230c200fb..0fbdacc7fab7 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -476,9 +476,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
NORMAL_EXCEPTION_PROLOG(INST_STORAGE); \
mfspr r5,SPRN_ESR; /* Grab the ESR and save it */ \
stw r5,_ESR(r11); \
- mr r4,r12; /* Pass SRR0 as arg2 */ \
- stw r4, _DEAR(r11); \
- li r5,0; /* Pass zero as arg3 */ \
+ stw r12, _DEAR(r11); /* Pass SRR0 as arg2 */ \
EXC_XFER_LITE(0x0400, handle_page_fault)
#define ALIGNMENT_EXCEPTION \
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index e866cae57e2f..9a499af3eebf 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1512,13 +1512,15 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
}
EXPORT_SYMBOL_GPL(hash_page);
-int do_hash_fault(struct pt_regs *regs, unsigned long ea, unsigned long dsisr)
+long do_hash_fault(struct pt_regs *regs)
{
+ unsigned long ea = regs->dar;
+ unsigned long dsisr = regs->dsisr;
unsigned long access = _PAGE_PRESENT | _PAGE_READ;
unsigned long flags = 0;
struct mm_struct *mm;
unsigned int region_id;
- int err;
+ long err;
if (unlikely(dsisr & (DSISR_BAD_FAULT_64S | DSISR_KEYFAULT)))
goto page_fault;
@@ -1582,7 +1584,7 @@ int do_hash_fault(struct pt_regs *regs, unsigned long ea, unsigned long dsisr)
} else if (err) {
page_fault:
- err = do_page_fault(regs, ea, dsisr);
+ err = do_page_fault(regs);
}
return err;
diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
index 584567970c11..985902ce0272 100644
--- a/arch/powerpc/mm/book3s64/slb.c
+++ b/arch/powerpc/mm/book3s64/slb.c
@@ -813,8 +813,9 @@ static long slb_allocate_user(struct mm_struct *mm, unsigned long ea)
return slb_insert_entry(ea, context, flags, ssize, false);
}
-long do_slb_fault(struct pt_regs *regs, unsigned long ea)
+long do_slb_fault(struct pt_regs *regs)
{
+ unsigned long ea = regs->dar;
unsigned long id = get_region_id(ea);
/* IRQs are not reconciled here, so can't check irqs_disabled */
@@ -865,13 +866,15 @@ long do_slb_fault(struct pt_regs *regs, unsigned long ea)
}
}
-void do_bad_slb_fault(struct pt_regs *regs, unsigned long ea, long err)
+void do_bad_slb_fault(struct pt_regs *regs)
{
+ int err = regs->result;
+
if (err == -EFAULT) {
if (user_mode(regs))
- _exception(SIGSEGV, regs, SEGV_BNDERR, ea);
+ _exception(SIGSEGV, regs, SEGV_BNDERR, regs->dar);
else
- bad_page_fault(regs, ea, SIGSEGV);
+ bad_page_fault(regs, regs->dar, SIGSEGV);
} else if (err == -EINVAL) {
unrecoverable_exception(regs);
} else {
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 8961b44f350c..273ff845eccf 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -542,12 +542,11 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
}
NOKPROBE_SYMBOL(__do_page_fault);
-int do_page_fault(struct pt_regs *regs, unsigned long address,
- unsigned long error_code)
+long do_page_fault(struct pt_regs *regs)
{
const struct exception_table_entry *entry;
enum ctx_state prev_state = exception_enter();
- int rc = __do_page_fault(regs, address, error_code);
+ int rc = __do_page_fault(regs, regs->dar, regs->dsisr);
exception_exit(prev_state);
if (likely(!rc))
return 0;
--
2.23.0
^ permalink raw reply related [flat|nested] 56+ messages in thread
* [PATCH v6 06/39] powerpc: do_break get registers from regs
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (4 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 05/39] powerpc: remove arguments from fault handler functions Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 07/39] powerpc: bad_page_fault " Nicholas Piggin
` (32 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Similar to the previous patch, this makes the interrupt handler function
type more regular so that it can be wrapped by the next patch.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/debug.h | 3 +--
arch/powerpc/kernel/head_8xx.S | 5 ++---
arch/powerpc/kernel/process.c | 7 +++----
3 files changed, 6 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/include/asm/debug.h b/arch/powerpc/include/asm/debug.h
index ec57daf87f40..0550eceab3ca 100644
--- a/arch/powerpc/include/asm/debug.h
+++ b/arch/powerpc/include/asm/debug.h
@@ -52,8 +52,7 @@ extern void do_send_trap(struct pt_regs *regs, unsigned long address,
unsigned long error_code, int brkpt);
#else
-extern void do_break(struct pt_regs *regs, unsigned long address,
- unsigned long error_code);
+void do_break(struct pt_regs *regs);
#endif
#endif /* _ASM_POWERPC_DEBUG_H */
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 0b2c247cfdff..7869db974185 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -364,10 +364,9 @@ do_databreakpoint:
addi r3,r1,STACK_FRAME_OVERHEAD
mfspr r4,SPRN_BAR
stw r4,_DAR(r11)
-#ifdef CONFIG_VMAP_STACK
- lwz r5,_DSISR(r11)
-#else
+#ifndef CONFIG_VMAP_STACK
mfspr r5,SPRN_DSISR
+ stw r5,_DSISR(r11)
#endif
EXC_XFER_STD(0x1c00, do_break)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index a66f435dabbf..4f0f81e9420b 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -659,11 +659,10 @@ static void do_break_handler(struct pt_regs *regs)
}
}
-void do_break (struct pt_regs *regs, unsigned long address,
- unsigned long error_code)
+void do_break(struct pt_regs *regs)
{
current->thread.trap_nr = TRAP_HWBKPT;
- if (notify_die(DIE_DABR_MATCH, "dabr_match", regs, error_code,
+ if (notify_die(DIE_DABR_MATCH, "dabr_match", regs, regs->dsisr,
11, SIGSEGV) == NOTIFY_STOP)
return;
@@ -681,7 +680,7 @@ void do_break (struct pt_regs *regs, unsigned long address,
do_break_handler(regs);
/* Deliver the signal to userspace */
- force_sig_fault(SIGTRAP, TRAP_HWBKPT, (void __user *)address);
+ force_sig_fault(SIGTRAP, TRAP_HWBKPT, (void __user *)regs->dar);
}
#endif /* CONFIG_PPC_ADV_DEBUG_REGS */
--
2.23.0
* [PATCH v6 07/39] powerpc: bad_page_fault get registers from regs
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (5 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 06/39] powerpc: do_break get registers from regs Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 17:09 ` Christophe Leroy
2021-01-15 16:49 ` [PATCH v6 08/39] powerpc: rearrange do_page_fault error case to be inside exception_enter Nicholas Piggin
` (31 subsequent siblings)
38 siblings, 1 reply; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Similar to the previous patch, this makes the interrupt handler function
types more regular so that they can be wrapped by the next patch.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/bug.h | 5 +++--
arch/powerpc/kernel/entry_32.S | 3 +--
arch/powerpc/kernel/exceptions-64e.S | 3 +--
arch/powerpc/kernel/exceptions-64s.S | 4 +---
arch/powerpc/kernel/traps.c | 2 +-
arch/powerpc/mm/book3s64/hash_utils.c | 4 ++--
arch/powerpc/mm/book3s64/slb.c | 2 +-
arch/powerpc/mm/fault.c | 13 ++++++++++---
arch/powerpc/platforms/8xx/machine_check.c | 2 +-
9 files changed, 21 insertions(+), 17 deletions(-)
diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index f7827e993196..8f09ddae9305 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -112,8 +112,9 @@
struct pt_regs;
long do_page_fault(struct pt_regs *);
-extern void bad_page_fault(struct pt_regs *, unsigned long, int);
-void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig);
+void bad_page_fault(struct pt_regs *, int);
+void __bad_page_fault(struct pt_regs *regs, int sig);
+void do_bad_page_fault_segv(struct pt_regs *regs);
extern void _exception(int, struct pt_regs *, int, unsigned long);
extern void _exception_pkey(struct pt_regs *, unsigned long, int);
extern void die(const char *, struct pt_regs *, long);
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index d6ea3f2d6cc0..b102b40c4988 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -672,9 +672,8 @@ handle_page_fault:
lwz r0,_TRAP(r1)
clrrwi r0,r0,1
stw r0,_TRAP(r1)
- mr r5,r3
+ mr r4,r3 /* err arg for bad_page_fault */
addi r3,r1,STACK_FRAME_OVERHEAD
- lwz r4,_DAR(r1)
bl __bad_page_fault
b ret_from_except_full
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 43e71d86dcbf..52421042a020 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1018,9 +1018,8 @@ storage_fault_common:
bne- 1f
b ret_from_except_lite
1: bl save_nvgprs
- mr r5,r3
+ mr r4,r3
addi r3,r1,STACK_FRAME_OVERHEAD
- ld r4,_DAR(r1)
bl __bad_page_fault
b ret_from_except
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 839dcb94eea7..b90d3cde14cf 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -2151,9 +2151,7 @@ EXC_COMMON_BEGIN(h_data_storage_common)
GEN_COMMON h_data_storage
addi r3,r1,STACK_FRAME_OVERHEAD
BEGIN_MMU_FTR_SECTION
- ld r4,_DAR(r1)
- li r5,SIGSEGV
- bl bad_page_fault
+ bl do_bad_page_fault_segv
MMU_FTR_SECTION_ELSE
bl unknown_exception
ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_TYPE_RADIX)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 3ec7b443fe6b..f3f6af3141ee 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1612,7 +1612,7 @@ void alignment_exception(struct pt_regs *regs)
if (user_mode(regs))
_exception(sig, regs, code, regs->dar);
else
- bad_page_fault(regs, regs->dar, sig);
+ bad_page_fault(regs, sig);
bail:
exception_exit(prev_state);
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 9a499af3eebf..1a270cc37d97 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1539,7 +1539,7 @@ long do_hash_fault(struct pt_regs *regs)
* the access, or panic if there isn't a handler.
*/
if (unlikely(in_nmi())) {
- bad_page_fault(regs, ea, SIGSEGV);
+ bad_page_fault(regs, SIGSEGV);
return 0;
}
@@ -1578,7 +1578,7 @@ long do_hash_fault(struct pt_regs *regs)
else
_exception(SIGBUS, regs, BUS_ADRERR, ea);
} else {
- bad_page_fault(regs, ea, SIGBUS);
+ bad_page_fault(regs, SIGBUS);
}
err = 0;
diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
index 985902ce0272..c581548b533f 100644
--- a/arch/powerpc/mm/book3s64/slb.c
+++ b/arch/powerpc/mm/book3s64/slb.c
@@ -874,7 +874,7 @@ void do_bad_slb_fault(struct pt_regs *regs)
if (user_mode(regs))
_exception(SIGSEGV, regs, SEGV_BNDERR, regs->dar);
else
- bad_page_fault(regs, regs->dar, SIGSEGV);
+ bad_page_fault(regs, SIGSEGV);
} else if (err == -EINVAL) {
unrecoverable_exception(regs);
} else {
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 273ff845eccf..e476d7701413 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -566,7 +566,7 @@ NOKPROBE_SYMBOL(do_page_fault);
* It is called from the DSI and ISI handlers in head.S and from some
* of the procedures in traps.c.
*/
-void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
+void __bad_page_fault(struct pt_regs *regs, int sig)
{
int is_write = page_fault_is_write(regs->dsisr);
@@ -604,7 +604,7 @@ void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
die("Kernel access of bad area", regs, sig);
}
-void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
+void bad_page_fault(struct pt_regs *regs, int sig)
{
const struct exception_table_entry *entry;
@@ -613,5 +613,12 @@ void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
if (entry)
instruction_pointer_set(regs, extable_fixup(entry));
else
- __bad_page_fault(regs, address, sig);
+ __bad_page_fault(regs, sig);
}
+
+#ifdef CONFIG_PPC_BOOK3S_64
+void do_bad_page_fault_segv(struct pt_regs *regs)
+{
+ bad_page_fault(regs, SIGSEGV);
+}
+#endif
diff --git a/arch/powerpc/platforms/8xx/machine_check.c b/arch/powerpc/platforms/8xx/machine_check.c
index 88dedf38eccd..656365975895 100644
--- a/arch/powerpc/platforms/8xx/machine_check.c
+++ b/arch/powerpc/platforms/8xx/machine_check.c
@@ -26,7 +26,7 @@ int machine_check_8xx(struct pt_regs *regs)
* to deal with that than having a wart in the mcheck handler.
* -- BenH
*/
- bad_page_fault(regs, regs->dar, SIGBUS);
+ bad_page_fault(regs, SIGBUS);
return 1;
#else
return 0;
--
2.23.0
* [PATCH v6 08/39] powerpc: rearrange do_page_fault error case to be inside exception_enter
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (6 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 07/39] powerpc: bad_page_fault " Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-28 9:25 ` Christophe Leroy
2021-01-15 16:49 ` [PATCH v6 09/39] powerpc/64s: move bad_page_fault handling to C Nicholas Piggin
` (30 subsequent siblings)
38 siblings, 1 reply; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This keeps the context tracking over the entire interrupt handler, which
helps later with moving context tracking into the interrupt wrappers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/mm/fault.c | 28 ++++++++++++++++------------
1 file changed, 16 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index e476d7701413..e4121fd9fcf1 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -544,20 +544,24 @@ NOKPROBE_SYMBOL(__do_page_fault);
long do_page_fault(struct pt_regs *regs)
{
- const struct exception_table_entry *entry;
- enum ctx_state prev_state = exception_enter();
- int rc = __do_page_fault(regs, regs->dar, regs->dsisr);
- exception_exit(prev_state);
- if (likely(!rc))
- return 0;
-
- entry = search_exception_tables(regs->nip);
- if (unlikely(!entry))
- return rc;
+ enum ctx_state prev_state;
+ long err;
+
+ prev_state = exception_enter();
+ err = __do_page_fault(regs, regs->dar, regs->dsisr);
+ if (unlikely(err)) {
+ const struct exception_table_entry *entry;
+
+ entry = search_exception_tables(regs->nip);
+ if (likely(entry)) {
+ instruction_pointer_set(regs, extable_fixup(entry));
+ err = 0;
+ }
+ }
- instruction_pointer_set(regs, extable_fixup(entry));
+ exception_exit(prev_state);
- return 0;
+ return err;
}
NOKPROBE_SYMBOL(do_page_fault);
--
2.23.0
* [PATCH v6 09/39] powerpc/64s: move bad_page_fault handling to C
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (7 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 08/39] powerpc: rearrange do_page_fault error case to be inside exception_enter Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 10/39] powerpc/64s: split do_hash_fault Nicholas Piggin
` (29 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This simplifies the code. It also helps when introducing interrupt
handler wrappers, because the wrapper functionality does not cope with
asm entry code calling into more than one handler function.
32-bit and 64e still have some such cases, which limits some ways
they can use interrupt wrappers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/exceptions-64s.S | 12 ------------
arch/powerpc/mm/fault.c | 4 ++++
2 files changed, 4 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index b90d3cde14cf..e69a912c2cc6 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1447,12 +1447,6 @@ BEGIN_MMU_FTR_SECTION
MMU_FTR_SECTION_ELSE
bl do_page_fault
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
- cmpdi r3,0
- beq+ interrupt_return
- mr r5,r3
- addi r3,r1,STACK_FRAME_OVERHEAD
- ld r4,_DAR(r1)
- bl __bad_page_fault
b interrupt_return
1: bl do_break
@@ -1557,12 +1551,6 @@ BEGIN_MMU_FTR_SECTION
MMU_FTR_SECTION_ELSE
bl do_page_fault
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
- cmpdi r3,0
- beq+ interrupt_return
- mr r5,r3
- addi r3,r1,STACK_FRAME_OVERHEAD
- ld r4,_DAR(r1)
- bl __bad_page_fault
b interrupt_return
GEN_KVM instruction_access
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index e4121fd9fcf1..965c89e63997 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -556,6 +556,10 @@ long do_page_fault(struct pt_regs *regs)
if (likely(entry)) {
instruction_pointer_set(regs, extable_fixup(entry));
err = 0;
+ } else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) {
+ /* 32 and 64e handle this in asm */
+ __bad_page_fault(regs, err);
+ err = 0;
}
}
--
2.23.0
* [PATCH v6 10/39] powerpc/64s: split do_hash_fault
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (8 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 09/39] powerpc/64s: move bad_page_fault handling to C Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 11/39] powerpc/mm: Remove stale do_page_fault comment referring to SLB faults Nicholas Piggin
` (28 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This is required for subsequent interrupt wrapper implementation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/mm/book3s64/hash_utils.c | 56 ++++++++++++++++-----------
1 file changed, 33 insertions(+), 23 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 1a270cc37d97..d7d3a80a51d4 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1512,7 +1512,7 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
}
EXPORT_SYMBOL_GPL(hash_page);
-long do_hash_fault(struct pt_regs *regs)
+static long __do_hash_fault(struct pt_regs *regs)
{
unsigned long ea = regs->dar;
unsigned long dsisr = regs->dsisr;
@@ -1522,27 +1522,6 @@ long do_hash_fault(struct pt_regs *regs)
unsigned int region_id;
long err;
- if (unlikely(dsisr & (DSISR_BAD_FAULT_64S | DSISR_KEYFAULT)))
- goto page_fault;
-
- /*
- * If we are in an "NMI" (e.g., an interrupt when soft-disabled), then
- * don't call hash_page, just fail the fault. This is required to
- * prevent re-entrancy problems in the hash code, namely perf
- * interrupts hitting while something holds H_PAGE_BUSY, and taking a
- * hash fault. See the comment in hash_preload().
- *
- * We come here as a result of a DSI at a point where we don't want
- * to call hash_page, such as when we are accessing memory (possibly
- * user memory) inside a PMU interrupt that occurred while interrupts
- * were soft-disabled. We want to invoke the exception handler for
- * the access, or panic if there isn't a handler.
- */
- if (unlikely(in_nmi())) {
- bad_page_fault(regs, SIGSEGV);
- return 0;
- }
-
region_id = get_region_id(ea);
if ((region_id == VMALLOC_REGION_ID) || (region_id == IO_REGION_ID))
mm = &init_mm;
@@ -1581,8 +1560,39 @@ long do_hash_fault(struct pt_regs *regs)
bad_page_fault(regs, SIGBUS);
}
err = 0;
+ }
+
+ return err;
+}
+
+long do_hash_fault(struct pt_regs *regs)
+{
+ unsigned long dsisr = regs->dsisr;
+ long err;
+
+ if (unlikely(dsisr & (DSISR_BAD_FAULT_64S | DSISR_KEYFAULT)))
+ goto page_fault;
+
+ /*
+ * If we are in an "NMI" (e.g., an interrupt when soft-disabled), then
+ * don't call hash_page, just fail the fault. This is required to
+ * prevent re-entrancy problems in the hash code, namely perf
+ * interrupts hitting while something holds H_PAGE_BUSY, and taking a
+ * hash fault. See the comment in hash_preload().
+ *
+ * We come here as a result of a DSI at a point where we don't want
+ * to call hash_page, such as when we are accessing memory (possibly
+ * user memory) inside a PMU interrupt that occurred while interrupts
+ * were soft-disabled. We want to invoke the exception handler for
+ * the access, or panic if there isn't a handler.
+ */
+ if (unlikely(in_nmi())) {
+ bad_page_fault(regs, SIGSEGV);
+ return 0;
+ }
- } else if (err) {
+ err = __do_hash_fault(regs);
+ if (err) {
page_fault:
err = do_page_fault(regs);
}
--
2.23.0
* [PATCH v6 11/39] powerpc/mm: Remove stale do_page_fault comment referring to SLB faults
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (9 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 10/39] powerpc/64s: split do_hash_fault Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 12/39] powerpc/64s: slb comment update Nicholas Piggin
` (27 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
SLB faults no longer call do_page_fault; this call was removed somewhere
between 2.6.0 and 2.6.12.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/mm/fault.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 965c89e63997..900901d0038e 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -377,13 +377,11 @@ static void sanity_check_fault(bool is_write, bool is_user,
/*
* For 600- and 800-family processors, the error_code parameter is DSISR
- * for a data fault, SRR1 for an instruction fault. For 400-family processors
- * the error_code parameter is ESR for a data fault, 0 for an instruction
- * fault.
- * For 64-bit processors, the error_code parameter is
- * - DSISR for a non-SLB data access fault,
- * - SRR1 & 0x08000000 for a non-SLB instruction access fault
- * - 0 any SLB fault.
+ * for a data fault, SRR1 for an instruction fault.
+ * For 400-family processors the error_code parameter is ESR for a data fault,
+ * 0 for an instruction fault.
+ * For 64-bit processors, the error_code parameter is DSISR for a data access
+ * fault, SRR1 & 0x08000000 for an instruction access fault.
*
* The return value is 0 if the fault was handled, or the signal
* number if this is a kernel fault that can't be handled here.
--
2.23.0
* [PATCH v6 12/39] powerpc/64s: slb comment update
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (10 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 11/39] powerpc/mm: Remove stale do_page_fault comment referring to SLB faults Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 13/39] powerpc/traps: add NOKPROBE_SYMBOL for sreset and mce Nicholas Piggin
` (26 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This makes a small improvement to the description of the SLB interrupt
environment. It moves the memory access restrictions into one paragraph
and the interrupt restrictions into the next, rather than mixing the two.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/mm/book3s64/slb.c | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
index c581548b533f..14c62b685f0c 100644
--- a/arch/powerpc/mm/book3s64/slb.c
+++ b/arch/powerpc/mm/book3s64/slb.c
@@ -825,19 +825,21 @@ long do_slb_fault(struct pt_regs *regs)
return -EINVAL;
/*
- * SLB kernel faults must be very careful not to touch anything
- * that is not bolted. E.g., PACA and global variables are okay,
- * mm->context stuff is not.
- *
- * SLB user faults can access all of kernel memory, but must be
- * careful not to touch things like IRQ state because it is not
- * "reconciled" here. The difficulty is that we must use
- * fast_exception_return to return from kernel SLB faults without
- * looking at possible non-bolted memory. We could test user vs
- * kernel faults in the interrupt handler asm and do a full fault,
- * reconcile, ret_from_except for user faults which would make them
- * first class kernel code. But for performance it's probably nicer
- * if they go via fast_exception_return too.
+ * SLB kernel faults must be very careful not to touch anything that is
+ * not bolted. E.g., PACA and global variables are okay, mm->context
+ * stuff is not. SLB user faults may access all of memory (and induce
+ * one recursive SLB kernel fault), so the kernel fault must not
+ * trample on the user fault state at those points.
+ */
+
+ /*
+ * The interrupt state is not reconciled, for performance, so that
+ * fast_interrupt_return can be used. The handler must not touch local
+ * irq state, or schedule. We could test for usermode and upgrade to a
+ * normal process context (synchronous) interrupt for those, which
+ * would make them first-class kernel code and able to be traced and
+ * instrumented, although performance would suffer a bit, it would
+ * probably be a good tradeoff.
*/
if (id >= LINEAR_MAP_REGION_ID) {
long err;
--
2.23.0
* [PATCH v6 13/39] powerpc/traps: add NOKPROBE_SYMBOL for sreset and mce
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (11 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 12/39] powerpc/64s: slb comment update Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 14/39] powerpc/perf: move perf irq/nmi handling details into traps.c Nicholas Piggin
` (25 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
These NMIs can fire at any time, including inside kprobe code, so
exclude them from kprobes.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/traps.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index f3f6af3141ee..738370519937 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -517,6 +517,7 @@ void system_reset_exception(struct pt_regs *regs)
/* What should we do here? We could issue a shutdown or hard reset. */
}
+NOKPROBE_SYMBOL(system_reset_exception);
/*
* I/O accesses can cause machine checks on powermacs.
@@ -843,6 +844,7 @@ void machine_check_exception(struct pt_regs *regs)
bail:
if (nmi) nmi_exit();
}
+NOKPROBE_SYMBOL(machine_check_exception);
void SMIException(struct pt_regs *regs)
{
--
2.23.0
* [PATCH v6 14/39] powerpc/perf: move perf irq/nmi handling details into traps.c
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (12 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 13/39] powerpc/traps: add NOKPROBE_SYMBOL for sreset and mce Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-19 10:24 ` Athira Rajeev
2021-01-15 16:49 ` [PATCH v6 15/39] powerpc/time: move timer_broadcast_interrupt prototype to asm/time.h Nicholas Piggin
` (24 subsequent siblings)
38 siblings, 1 reply; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This is required in order to allow more significant differences between
NMI-type interrupt handlers and regular asynchronous handlers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/traps.c | 31 +++++++++++++++++++++++++++-
arch/powerpc/perf/core-book3s.c | 35 ++------------------------------
arch/powerpc/perf/core-fsl-emb.c | 25 -----------------------
3 files changed, 32 insertions(+), 59 deletions(-)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 738370519937..bd55f201115b 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1892,11 +1892,40 @@ void vsx_unavailable_tm(struct pt_regs *regs)
}
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-void performance_monitor_exception(struct pt_regs *regs)
+static void performance_monitor_exception_nmi(struct pt_regs *regs)
+{
+ nmi_enter();
+
+ __this_cpu_inc(irq_stat.pmu_irqs);
+
+ perf_irq(regs);
+
+ nmi_exit();
+}
+
+static void performance_monitor_exception_async(struct pt_regs *regs)
{
+ irq_enter();
+
__this_cpu_inc(irq_stat.pmu_irqs);
perf_irq(regs);
+
+ irq_exit();
+}
+
+void performance_monitor_exception(struct pt_regs *regs)
+{
+ /*
+ * On 64-bit, if perf interrupts hit in a local_irq_disable
+ * (soft-masked) region, we consider them as NMIs. This is required to
+ * prevent hash faults on user addresses when reading callchains (and
+ * looks better from an irq tracing perspective).
+ */
+ if (IS_ENABLED(CONFIG_PPC64) && unlikely(arch_irq_disabled_regs(regs)))
+ performance_monitor_exception_nmi(regs);
+ else
+ performance_monitor_exception_async(regs);
}
#ifdef CONFIG_PPC_ADV_DEBUG_REGS
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 28206b1fe172..9fd06010e8b6 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -110,10 +110,6 @@ static inline void perf_read_regs(struct pt_regs *regs)
{
regs->result = 0;
}
-static inline int perf_intr_is_nmi(struct pt_regs *regs)
-{
- return 0;
-}
static inline int siar_valid(struct pt_regs *regs)
{
@@ -353,15 +349,6 @@ static inline void perf_read_regs(struct pt_regs *regs)
regs->result = use_siar;
}
-/*
- * If interrupts were soft-disabled when a PMU interrupt occurs, treat
- * it as an NMI.
- */
-static inline int perf_intr_is_nmi(struct pt_regs *regs)
-{
- return (regs->softe & IRQS_DISABLED);
-}
-
/*
* On processors like P7+ that have the SIAR-Valid bit, marked instructions
* must be sampled only if the SIAR-valid bit is set.
@@ -2279,7 +2266,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
struct perf_event *event;
unsigned long val[8];
int found, active;
- int nmi;
if (cpuhw->n_limited)
freeze_limited_counters(cpuhw, mfspr(SPRN_PMC5),
@@ -2287,18 +2273,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
perf_read_regs(regs);
- /*
- * If perf interrupts hit in a local_irq_disable (soft-masked) region,
- * we consider them as NMIs. This is required to prevent hash faults on
- * user addresses when reading callchains. See the NMI test in
- * do_hash_page.
- */
- nmi = perf_intr_is_nmi(regs);
- if (nmi)
- nmi_enter();
- else
- irq_enter();
-
/* Read all the PMCs since we'll need them a bunch of times */
for (i = 0; i < ppmu->n_counter; ++i)
val[i] = read_pmc(i + 1);
@@ -2344,8 +2318,8 @@ static void __perf_event_interrupt(struct pt_regs *regs)
}
}
}
- if (!found && !nmi && printk_ratelimit())
- printk(KERN_WARNING "Can't find PMC that caused IRQ\n");
+ if (unlikely(!found) && !arch_irq_disabled_regs(regs))
+ printk_ratelimited(KERN_WARNING "Can't find PMC that caused IRQ\n");
/*
* Reset MMCR0 to its normal value. This will set PMXE and
@@ -2355,11 +2329,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
* we get back out of this interrupt.
*/
write_mmcr0(cpuhw, cpuhw->mmcr.mmcr0);
-
- if (nmi)
- nmi_exit();
- else
- irq_exit();
}
static void perf_event_interrupt(struct pt_regs *regs)
diff --git a/arch/powerpc/perf/core-fsl-emb.c b/arch/powerpc/perf/core-fsl-emb.c
index e0e7e276bfd2..ee721f420a7b 100644
--- a/arch/powerpc/perf/core-fsl-emb.c
+++ b/arch/powerpc/perf/core-fsl-emb.c
@@ -31,19 +31,6 @@ static atomic_t num_events;
/* Used to avoid races in calling reserve/release_pmc_hardware */
static DEFINE_MUTEX(pmc_reserve_mutex);
-/*
- * If interrupts were soft-disabled when a PMU interrupt occurs, treat
- * it as an NMI.
- */
-static inline int perf_intr_is_nmi(struct pt_regs *regs)
-{
-#ifdef __powerpc64__
- return (regs->softe & IRQS_DISABLED);
-#else
- return 0;
-#endif
-}
-
static void perf_event_interrupt(struct pt_regs *regs);
/*
@@ -659,13 +646,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
struct perf_event *event;
unsigned long val;
int found = 0;
- int nmi;
-
- nmi = perf_intr_is_nmi(regs);
- if (nmi)
- nmi_enter();
- else
- irq_enter();
for (i = 0; i < ppmu->n_counter; ++i) {
event = cpuhw->event[i];
@@ -690,11 +670,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
mtmsr(mfmsr() | MSR_PMM);
mtpmr(PMRN_PMGC0, PMGC0_PMIE | PMGC0_FCECE);
isync();
-
- if (nmi)
- nmi_exit();
- else
- irq_exit();
}
void hw_perf_event_setup(int cpu)
--
2.23.0
* [PATCH v6 15/39] powerpc/time: move timer_broadcast_interrupt prototype to asm/time.h
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (13 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 14/39] powerpc/perf: move perf irq/nmi handling details into traps.c Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 16/39] powerpc: add and use unknown_async_exception Nicholas Piggin
` (23 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Interrupt handler prototypes are going to be rearranged in a
future patch, so tidy this out of the way first.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/hw_irq.h | 1 -
arch/powerpc/include/asm/time.h | 2 ++
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 0363734ff56e..e5def36212cf 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -54,7 +54,6 @@ extern void replay_system_reset(void);
extern void replay_soft_interrupts(void);
extern void timer_interrupt(struct pt_regs *);
-extern void timer_broadcast_interrupt(void);
extern void performance_monitor_exception(struct pt_regs *regs);
extern void WatchdogException(struct pt_regs *regs);
extern void unknown_exception(struct pt_regs *regs);
diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index 8f789b597bae..8dd3cdb25338 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -102,6 +102,8 @@ DECLARE_PER_CPU(u64, decrementers_next_tb);
/* Convert timebase ticks to nanoseconds */
unsigned long long tb_to_ns(unsigned long long tb_ticks);
+void timer_broadcast_interrupt(void);
+
/* SPLPAR */
void accumulate_stolen_time(void);
--
2.23.0
* [PATCH v6 16/39] powerpc: add and use unknown_async_exception
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (14 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 15/39] powerpc/time: move timer_broadcast_interrupt prototype to asm/time.h Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 17/39] powerpc/fsl_booke/32: CacheLockingException remove args Nicholas Piggin
` (22 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This is currently the same as unknown_exception, but it will diverge
after interrupt wrappers are added and code moved out of asm into the
wrappers (e.g., async handlers will check FINISH_NAP).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/hw_irq.h | 1 +
arch/powerpc/kernel/exceptions-64s.S | 4 ++--
arch/powerpc/kernel/head_book3s_32.S | 6 +++---
arch/powerpc/kernel/traps.c | 12 ++++++++++++
4 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index e5def36212cf..75c2b137fc00 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -57,6 +57,7 @@ extern void timer_interrupt(struct pt_regs *);
extern void performance_monitor_exception(struct pt_regs *regs);
extern void WatchdogException(struct pt_regs *regs);
extern void unknown_exception(struct pt_regs *regs);
+void unknown_async_exception(struct pt_regs *regs);
#ifdef CONFIG_PPC64
#include <asm/paca.h>
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index e69a912c2cc6..fe33197ea8fb 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1926,7 +1926,7 @@ EXC_COMMON_BEGIN(doorbell_super_common)
#ifdef CONFIG_PPC_DOORBELL
bl doorbell_exception
#else
- bl unknown_exception
+ bl unknown_async_exception
#endif
b interrupt_return
@@ -2312,7 +2312,7 @@ EXC_COMMON_BEGIN(h_doorbell_common)
#ifdef CONFIG_PPC_DOORBELL
bl doorbell_exception
#else
- bl unknown_exception
+ bl unknown_async_exception
#endif
b interrupt_return
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 94ad1372c490..9b4d5432e2db 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -238,8 +238,8 @@ __secondary_hold_acknowledge:
/* System reset */
/* core99 pmac starts the seconary here by changing the vector, and
- putting it back to what it was (unknown_exception) when done. */
- EXCEPTION(0x100, Reset, unknown_exception, EXC_XFER_STD)
+ putting it back to what it was (unknown_async_exception) when done. */
+ EXCEPTION(0x100, Reset, unknown_async_exception, EXC_XFER_STD)
/* Machine check */
/*
@@ -631,7 +631,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_NEED_DTLB_SW_LRU)
#endif
#ifndef CONFIG_TAU_INT
-#define TAUException unknown_exception
+#define TAUException unknown_async_exception
#endif
EXCEPTION(0x1300, Trap_13, instruction_breakpoint_exception, EXC_XFER_STD)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index bd55f201115b..639bcafbad5e 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1073,6 +1073,18 @@ void unknown_exception(struct pt_regs *regs)
exception_exit(prev_state);
}
+void unknown_async_exception(struct pt_regs *regs)
+{
+ enum ctx_state prev_state = exception_enter();
+
+ printk("Bad trap at PC: %lx, SR: %lx, vector=%lx\n",
+ regs->nip, regs->msr, regs->trap);
+
+ _exception(SIGTRAP, regs, TRAP_UNK, 0);
+
+ exception_exit(prev_state);
+}
+
void instruction_breakpoint_exception(struct pt_regs *regs)
{
enum ctx_state prev_state = exception_enter();
--
2.23.0
* [PATCH v6 17/39] powerpc/fsl_booke/32: CacheLockingException remove args
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (15 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 16/39] powerpc: add and use unknown_async_exception Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 17:14 ` Christophe Leroy
2021-01-15 16:49 ` [PATCH v6 18/39] powerpc: DebugException " Nicholas Piggin
` (21 subsequent siblings)
38 siblings, 1 reply; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Like other interrupt handler conversions, switch to getting registers
from the pt_regs argument.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/head_fsl_booke.S | 6 +++---
arch/powerpc/kernel/traps.c | 5 +++--
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index fdd4d274c245..0d4d9a6fcca1 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -364,12 +364,12 @@ interrupt_base:
/* Data Storage Interrupt */
START_EXCEPTION(DataStorage)
NORMAL_EXCEPTION_PROLOG(DATA_STORAGE)
- mfspr r5,SPRN_ESR /* Grab the ESR, save it, pass arg3 */
+ mfspr r5,SPRN_ESR /* Grab the ESR, save it */
stw r5,_ESR(r11)
- mfspr r4,SPRN_DEAR /* Grab the DEAR, save it, pass arg2 */
+ mfspr r4,SPRN_DEAR /* Grab the DEAR, save it */
+ stw r4, _DEAR(r11)
andis. r10,r5,(ESR_ILK|ESR_DLK)@h
bne 1f
- stw r4, _DEAR(r11)
EXC_XFER_LITE(0x0300, handle_page_fault)
1:
addi r3,r1,STACK_FRAME_OVERHEAD
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 639bcafbad5e..1af52a4bce1f 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -2105,9 +2105,10 @@ void altivec_assist_exception(struct pt_regs *regs)
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_FSL_BOOKE
-void CacheLockingException(struct pt_regs *regs, unsigned long address,
- unsigned long error_code)
+void CacheLockingException(struct pt_regs *regs)
{
+ unsigned long error_code = regs->dsisr;
+
/* We treat cache locking instructions from the user
* as priv ops, in the future we could try to do
* something smarter
--
2.23.0
* [PATCH v6 18/39] powerpc: DebugException remove args
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (16 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 17/39] powerpc/fsl_booke/32: CacheLockingException remove args Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 19/39] powerpc/cell: tidy up pervasive declarations Nicholas Piggin
` (20 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Like other interrupt handler conversions, switch to getting registers
from the pt_regs argument.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/exceptions-64e.S | 2 --
arch/powerpc/kernel/head_40x.S | 1 +
arch/powerpc/kernel/head_booke.h | 2 ++
arch/powerpc/kernel/traps.c | 4 +++-
4 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 52421042a020..003999c7836c 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -791,7 +791,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
EXCEPTION_COMMON_CRIT(0xd00)
std r14,_DSISR(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
- mr r4,r14
ld r14,PACA_EXCRIT+EX_R14(r13)
ld r15,PACA_EXCRIT+EX_R15(r13)
bl save_nvgprs
@@ -864,7 +863,6 @@ kernel_dbg_exc:
INTS_DISABLE
std r14,_DSISR(r1)
addi r3,r1,STACK_FRAME_OVERHEAD
- mr r4,r14
ld r14,PACA_EXDBG+EX_R14(r13)
ld r15,PACA_EXDBG+EX_R15(r13)
bl save_nvgprs
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 3c5577ac4dc8..24724a7dad49 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -476,6 +476,7 @@ _ENTRY(saved_ksp_limit)
/* continue normal handling for a critical exception... */
2: mfspr r4,SPRN_DBSR
+ stw r4,_ESR(r11) /* DebugException takes DBSR in _ESR */
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_TEMPLATE(DebugException, 0x2002, \
(MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 0fbdacc7fab7..bf33af714d11 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -406,6 +406,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
\
/* continue normal handling for a debug exception... */ \
2: mfspr r4,SPRN_DBSR; \
+ stw r4,_ESR(r11); /* DebugException takes DBSR in _ESR */\
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_TEMPLATE(DebugException, 0x2008, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), debug_transfer_to_handler, ret_from_debug_exc)
@@ -459,6 +460,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
\
/* continue normal handling for a critical exception... */ \
2: mfspr r4,SPRN_DBSR; \
+ stw r4,_ESR(r11); /* DebugException takes DBSR in _ESR */\
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_TEMPLATE(DebugException, 0x2002, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), crit_transfer_to_handler, ret_from_crit_exc)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 1af52a4bce1f..6691774fe1fb 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -2000,8 +2000,10 @@ static void handle_debug(struct pt_regs *regs, unsigned long debug_status)
mtspr(SPRN_DBCR0, current->thread.debug.dbcr0);
}
-void DebugException(struct pt_regs *regs, unsigned long debug_status)
+void DebugException(struct pt_regs *regs)
{
+ unsigned long debug_status = regs->dsisr;
+
current->thread.debug.dbsr = debug_status;
/* Hack alert: On BookE, Branch Taken stops on the branch itself, while
--
2.23.0
* [PATCH v6 19/39] powerpc/cell: tidy up pervasive declarations
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (17 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 18/39] powerpc: DebugException " Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 20/39] powerpc: introduce die_mce Nicholas Piggin
` (19 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
These are declared in ras.h and defined in ras.c so remove them from
pervasive.h
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/platforms/cell/pervasive.c | 1 +
arch/powerpc/platforms/cell/pervasive.h | 3 ---
2 files changed, 1 insertion(+), 3 deletions(-)
diff --git a/arch/powerpc/platforms/cell/pervasive.c b/arch/powerpc/platforms/cell/pervasive.c
index 9068edef71f7..5b9a7e9f144b 100644
--- a/arch/powerpc/platforms/cell/pervasive.c
+++ b/arch/powerpc/platforms/cell/pervasive.c
@@ -25,6 +25,7 @@
#include <asm/cpu_has_feature.h>
#include "pervasive.h"
+#include "ras.h"
static void cbe_power_save(void)
{
diff --git a/arch/powerpc/platforms/cell/pervasive.h b/arch/powerpc/platforms/cell/pervasive.h
index c6fccad6caee..0da74ab10716 100644
--- a/arch/powerpc/platforms/cell/pervasive.h
+++ b/arch/powerpc/platforms/cell/pervasive.h
@@ -13,9 +13,6 @@
#define PERVASIVE_H
extern void cbe_pervasive_init(void);
-extern void cbe_system_error_exception(struct pt_regs *regs);
-extern void cbe_maintenance_exception(struct pt_regs *regs);
-extern void cbe_thermal_exception(struct pt_regs *regs);
#ifdef CONFIG_PPC_IBM_CELL_RESETBUTTON
extern int cbe_sysreset_hack(void);
--
2.23.0
* [PATCH v6 20/39] powerpc: introduce die_mce
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (18 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 19/39] powerpc/cell: tidy up pervasive declarations Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 21/39] powerpc/mce: ensure machine check handler always tests RI Nicholas Piggin
` (18 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
As explained by commit daf00ae71dad ("powerpc/traps: restore
recoverability of machine_check interrupts"), die() can't be called from
within an nmi_enter() region to nicely kill a process context that was
interrupted; nmi_exit() must be called first.
This adds a function die_mce which takes care of this for machine check
handlers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/bug.h | 1 +
arch/powerpc/kernel/traps.c | 21 +++++++++++++++------
arch/powerpc/platforms/powernv/opal.c | 2 +-
arch/powerpc/platforms/pseries/ras.c | 2 +-
4 files changed, 18 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index 8f09ddae9305..c10ae0a9bbaf 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -118,6 +118,7 @@ void do_bad_page_fault_segv(struct pt_regs *regs);
extern void _exception(int, struct pt_regs *, int, unsigned long);
extern void _exception_pkey(struct pt_regs *, unsigned long, int);
extern void die(const char *, struct pt_regs *, long);
+void die_mce(const char *str, struct pt_regs *regs, long err);
extern bool die_will_crash(void);
extern void panic_flush_kmsg_start(void);
extern void panic_flush_kmsg_end(void);
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 6691774fe1fb..f9ef183a5454 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -789,6 +789,19 @@ int machine_check_generic(struct pt_regs *regs)
}
#endif /* everything else */
+void die_mce(const char *str, struct pt_regs *regs, long err)
+{
+ /*
+ * The machine check wants to kill the interrupted context, but
+ * do_exit() checks for in_interrupt() and panics in that case, so
+ * exit the irq/nmi before calling die.
+ */
+ if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64))
+ nmi_exit();
+ die(str, regs, err);
+}
+NOKPROBE_SYMBOL(die_mce);
+
void machine_check_exception(struct pt_regs *regs)
{
int recover = 0;
@@ -831,15 +844,11 @@ void machine_check_exception(struct pt_regs *regs)
if (check_io_access(regs))
goto bail;
- if (nmi) nmi_exit();
-
- die("Machine check", regs, SIGBUS);
+ die_mce("Machine check", regs, SIGBUS);
/* Must die if the interrupt is not recoverable */
if (!(regs->msr & MSR_RI))
- die("Unrecoverable Machine check", regs, SIGBUS);
-
- return;
+ die_mce("Unrecoverable Machine check", regs, SIGBUS);
bail:
if (nmi) nmi_exit();
diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
index c61c3b62c8c6..303d7c775740 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -624,7 +624,7 @@ static int opal_recover_mce(struct pt_regs *regs,
*/
recovered = 0;
} else {
- die("Machine check", regs, SIGBUS);
+ die_mce("Machine check", regs, SIGBUS);
recovered = 1;
}
}
diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
index 149cec2212e6..2d9f985fd13a 100644
--- a/arch/powerpc/platforms/pseries/ras.c
+++ b/arch/powerpc/platforms/pseries/ras.c
@@ -813,7 +813,7 @@ static int recover_mce(struct pt_regs *regs, struct machine_check_event *evt)
*/
recovered = 0;
} else {
- die("Machine check", regs, SIGBUS);
+ die_mce("Machine check", regs, SIGBUS);
recovered = 1;
}
}
--
2.23.0
* [PATCH v6 21/39] powerpc/mce: ensure machine check handler always tests RI
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (19 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 20/39] powerpc: introduce die_mce Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 22/39] powerpc: improve handling of unrecoverable system reset Nicholas Piggin
` (17 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
A machine check that is handled must still check MSR[RI] for
recoverability of the interrupted context. Without this patch
it's possible for a handled machine check to return to a
context where it has clobbered live registers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/traps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index f9ef183a5454..3a8699995a77 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -846,11 +846,11 @@ void machine_check_exception(struct pt_regs *regs)
die_mce("Machine check", regs, SIGBUS);
+bail:
/* Must die if the interrupt is not recoverable */
if (!(regs->msr & MSR_RI))
die_mce("Unrecoverable Machine check", regs, SIGBUS);
-bail:
if (nmi) nmi_exit();
}
NOKPROBE_SYMBOL(machine_check_exception);
--
2.23.0
* [PATCH v6 22/39] powerpc: improve handling of unrecoverable system reset
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (20 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 21/39] powerpc/mce: ensure machine check handler always tests RI Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 23/39] powerpc: interrupt handler wrapper functions Nicholas Piggin
` (16 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
If an unrecoverable system reset hits in process context, the system
does not have to panic. Similar to machine check, call nmi_exit()
before die().
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/traps.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 3a8699995a77..f70d3f6174c8 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -503,8 +503,11 @@ void system_reset_exception(struct pt_regs *regs)
die("Unrecoverable nested System Reset", regs, SIGABRT);
#endif
/* Must die if the interrupt is not recoverable */
- if (!(regs->msr & MSR_RI))
+ if (!(regs->msr & MSR_RI)) {
+ /* For the reason explained in die_mce, nmi_exit before die */
+ nmi_exit();
die("Unrecoverable System Reset", regs, SIGABRT);
+ }
if (saved_hsrrs) {
mtspr(SPRN_HSRR0, hsrr0);
--
2.23.0
* [PATCH v6 23/39] powerpc: interrupt handler wrapper functions
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (21 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 22/39] powerpc: improve handling of unrecoverable system reset Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 24/39] powerpc: add interrupt wrapper entry / exit stub functions Nicholas Piggin
` (15 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Add wrapper functions (derived from x86 macros) for interrupt handler
functions. This allows interrupt entry code to be written in C.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 169 +++++++++++++++++++++++++++
1 file changed, 169 insertions(+)
create mode 100644 arch/powerpc/include/asm/interrupt.h
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
new file mode 100644
index 000000000000..5e2526aacc52
--- /dev/null
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -0,0 +1,169 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _ASM_POWERPC_INTERRUPT_H
+#define _ASM_POWERPC_INTERRUPT_H
+
+#include <linux/context_tracking.h>
+#include <asm/ftrace.h>
+
+/**
+ * DECLARE_INTERRUPT_HANDLER_RAW - Declare raw interrupt handler function
+ * @func: Function name of the entry point
+ * @returns: Returns a value back to asm caller
+ */
+#define DECLARE_INTERRUPT_HANDLER_RAW(func) \
+ __visible long func(struct pt_regs *regs)
+
+/**
+ * DEFINE_INTERRUPT_HANDLER_RAW - Define raw interrupt handler function
+ * @func: Function name of the entry point
+ * @returns: Returns a value back to asm caller
+ *
+ * @func is called from ASM entry code.
+ *
+ * This is a plain function which does no tracing, reconciling, etc.
+ * The macro is written so it acts as a function definition. Append the
+ * body with a pair of curly brackets.
+ *
+ * Raw interrupt handlers must not enable or disable interrupts, or
+ * schedule. Tracing and instrumentation (ftrace, lockdep, etc.) are not
+ * advisable either; they may be possible in a pinch, but the trace will
+ * look odd at least.
+ *
+ * A raw handler may call one of the other interrupt handler functions
+ * to be converted into that interrupt context without these restrictions.
+ *
+ * On PPC64, _RAW handlers may return with fast_interrupt_return.
+ *
+ * Specific handlers may have additional restrictions.
+ */
+#define DEFINE_INTERRUPT_HANDLER_RAW(func) \
+static __always_inline long ____##func(struct pt_regs *regs); \
+ \
+__visible noinstr long func(struct pt_regs *regs) \
+{ \
+ long ret; \
+ \
+ ret = ____##func (regs); \
+ \
+ return ret; \
+} \
+ \
+static __always_inline long ____##func(struct pt_regs *regs)
+
+/**
+ * DECLARE_INTERRUPT_HANDLER - Declare synchronous interrupt handler function
+ * @func: Function name of the entry point
+ */
+#define DECLARE_INTERRUPT_HANDLER(func) \
+ __visible void func(struct pt_regs *regs)
+
+/**
+ * DEFINE_INTERRUPT_HANDLER - Define synchronous interrupt handler function
+ * @func: Function name of the entry point
+ *
+ * @func is called from ASM entry code.
+ *
+ * The macro is written so it acts as a function definition. Append the
+ * body with a pair of curly brackets.
+ */
+#define DEFINE_INTERRUPT_HANDLER(func) \
+static __always_inline void ____##func(struct pt_regs *regs); \
+ \
+__visible noinstr void func(struct pt_regs *regs) \
+{ \
+ ____##func (regs); \
+} \
+ \
+static __always_inline void ____##func(struct pt_regs *regs)
+
+/**
+ * DECLARE_INTERRUPT_HANDLER_RET - Declare synchronous interrupt handler function
+ * @func: Function name of the entry point
+ * @returns: Returns a value back to asm caller
+ */
+#define DECLARE_INTERRUPT_HANDLER_RET(func) \
+ __visible long func(struct pt_regs *regs)
+
+/**
+ * DEFINE_INTERRUPT_HANDLER_RET - Define synchronous interrupt handler function
+ * @func: Function name of the entry point
+ * @returns: Returns a value back to asm caller
+ *
+ * @func is called from ASM entry code.
+ *
+ * The macro is written so it acts as a function definition. Append the
+ * body with a pair of curly brackets.
+ */
+#define DEFINE_INTERRUPT_HANDLER_RET(func) \
+static __always_inline long ____##func(struct pt_regs *regs); \
+ \
+__visible noinstr long func(struct pt_regs *regs) \
+{ \
+ long ret; \
+ \
+ ret = ____##func (regs); \
+ \
+ return ret; \
+} \
+ \
+static __always_inline long ____##func(struct pt_regs *regs)
+
+/**
+ * DECLARE_INTERRUPT_HANDLER_ASYNC - Declare asynchronous interrupt handler function
+ * @func: Function name of the entry point
+ */
+#define DECLARE_INTERRUPT_HANDLER_ASYNC(func) \
+ __visible void func(struct pt_regs *regs)
+
+/**
+ * DEFINE_INTERRUPT_HANDLER_ASYNC - Define asynchronous interrupt handler function
+ * @func: Function name of the entry point
+ *
+ * @func is called from ASM entry code.
+ *
+ * The macro is written so it acts as a function definition. Append the
+ * body with a pair of curly brackets.
+ */
+#define DEFINE_INTERRUPT_HANDLER_ASYNC(func) \
+static __always_inline void ____##func(struct pt_regs *regs); \
+ \
+__visible noinstr void func(struct pt_regs *regs) \
+{ \
+ ____##func (regs); \
+} \
+ \
+static __always_inline void ____##func(struct pt_regs *regs)
+
+/**
+ * DECLARE_INTERRUPT_HANDLER_NMI - Declare NMI interrupt handler function
+ * @func: Function name of the entry point
+ * @returns: Returns a value back to asm caller
+ */
+#define DECLARE_INTERRUPT_HANDLER_NMI(func) \
+ __visible long func(struct pt_regs *regs)
+
+/**
+ * DEFINE_INTERRUPT_HANDLER_NMI - Define NMI interrupt handler function
+ * @func: Function name of the entry point
+ * @returns: Returns a value back to asm caller
+ *
+ * @func is called from ASM entry code.
+ *
+ * The macro is written so it acts as a function definition. Append the
+ * body with a pair of curly brackets.
+ */
+#define DEFINE_INTERRUPT_HANDLER_NMI(func) \
+static __always_inline long ____##func(struct pt_regs *regs); \
+ \
+__visible noinstr long func(struct pt_regs *regs) \
+{ \
+ long ret; \
+ \
+ ret = ____##func (regs); \
+ \
+ return ret; \
+} \
+ \
+static __always_inline long ____##func(struct pt_regs *regs)
+
+#endif /* _ASM_POWERPC_INTERRUPT_H */
--
2.23.0
* [PATCH v6 24/39] powerpc: add interrupt wrapper entry / exit stub functions
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (22 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 23/39] powerpc: interrupt handler wrapper functions Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 25/39] powerpc: convert interrupt handlers to use wrappers Nicholas Piggin
` (14 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
These will be used by subsequent patches.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 66 ++++++++++++++++++++++++++++
1 file changed, 66 insertions(+)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 5e2526aacc52..3df1921cfda3 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -5,6 +5,50 @@
#include <linux/context_tracking.h>
#include <asm/ftrace.h>
+struct interrupt_state {
+};
+
+static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
+{
+}
+
+/*
+ * Care should be taken to note that interrupt_exit_prepare and
+ * interrupt_async_exit_prepare do not necessarily return immediately to
+ * regs context (e.g., if regs is usermode, we don't necessarily return to
+ * user mode). Other interrupts might be taken between here and return,
+ * context switch / preemption may occur in the exit path after this, or a
+ * signal may be delivered, etc.
+ *
+ * The real interrupt exit code is platform specific, e.g.,
+ * interrupt_exit_user_prepare / interrupt_exit_kernel_prepare for 64s.
+ *
+ * However interrupt_nmi_exit_prepare does return directly to regs, because
+ * NMIs do not do "exit work" or replay soft-masked interrupts.
+ */
+static inline void interrupt_exit_prepare(struct pt_regs *regs, struct interrupt_state *state)
+{
+}
+
+static inline void interrupt_async_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
+{
+}
+
+static inline void interrupt_async_exit_prepare(struct pt_regs *regs, struct interrupt_state *state)
+{
+}
+
+struct interrupt_nmi_state {
+};
+
+static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
+{
+}
+
+static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
+{
+}
+
/**
* DECLARE_INTERRUPT_HANDLER_RAW - Declare raw interrupt handler function
* @func: Function name of the entry point
@@ -71,7 +115,13 @@ static __always_inline void ____##func(struct pt_regs *regs); \
\
__visible noinstr void func(struct pt_regs *regs) \
{ \
+ struct interrupt_state state; \
+ \
+ interrupt_enter_prepare(regs, &state); \
+ \
____##func (regs); \
+ \
+ interrupt_exit_prepare(regs, &state); \
} \
\
static __always_inline void ____##func(struct pt_regs *regs)
@@ -99,10 +149,15 @@ static __always_inline long ____##func(struct pt_regs *regs); \
\
__visible noinstr long func(struct pt_regs *regs) \
{ \
+ struct interrupt_state state; \
long ret; \
\
+ interrupt_enter_prepare(regs, &state); \
+ \
ret = ____##func (regs); \
\
+ interrupt_exit_prepare(regs, &state); \
+ \
return ret; \
} \
\
@@ -129,7 +184,13 @@ static __always_inline void ____##func(struct pt_regs *regs); \
\
__visible noinstr void func(struct pt_regs *regs) \
{ \
+ struct interrupt_state state; \
+ \
+ interrupt_async_enter_prepare(regs, &state); \
+ \
____##func (regs); \
+ \
+ interrupt_async_exit_prepare(regs, &state); \
} \
\
static __always_inline void ____##func(struct pt_regs *regs)
@@ -157,10 +218,15 @@ static __always_inline long ____##func(struct pt_regs *regs); \
\
__visible noinstr long func(struct pt_regs *regs) \
{ \
+ struct interrupt_nmi_state state; \
long ret; \
\
+ interrupt_nmi_enter_prepare(regs, &state); \
+ \
ret = ____##func (regs); \
\
+ interrupt_nmi_exit_prepare(regs, &state); \
+ \
return ret; \
} \
\
--
2.23.0
* [PATCH v6 25/39] powerpc: convert interrupt handlers to use wrappers
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (23 preceding siblings ...)
2021-01-15 16:49 ` [PATCH v6 24/39] powerpc: add interrupt wrapper entry / exit stub functions Nicholas Piggin
@ 2021-01-15 16:49 ` Nicholas Piggin
2021-01-20 10:48 ` kernel test robot
2021-01-20 11:45 ` kernel test robot
2021-01-15 16:49 ` [PATCH v6 26/39] powerpc: add interrupt_cond_local_irq_enable helper Nicholas Piggin
` (13 subsequent siblings)
38 siblings, 2 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/asm-prototypes.h | 29 -------
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 1 -
arch/powerpc/include/asm/debug.h | 3 -
arch/powerpc/include/asm/hw_irq.h | 9 --
arch/powerpc/include/asm/interrupt.h | 66 +++++++++++++++
arch/powerpc/kernel/dbell.c | 6 +-
arch/powerpc/kernel/irq.c | 3 +-
arch/powerpc/kernel/mce.c | 5 +-
arch/powerpc/kernel/process.c | 3 +-
arch/powerpc/kernel/syscall_64.c | 1 +
arch/powerpc/kernel/tau_6xx.c | 2 +-
arch/powerpc/kernel/time.c | 3 +-
arch/powerpc/kernel/traps.c | 84 +++++++++++--------
arch/powerpc/kernel/watchdog.c | 7 +-
arch/powerpc/kvm/book3s_hv.c | 1 +
arch/powerpc/kvm/book3s_hv_builtin.c | 1 +
arch/powerpc/kvm/booke.c | 1 +
arch/powerpc/mm/book3s64/hash_utils.c | 11 ++-
arch/powerpc/mm/book3s64/slb.c | 7 +-
arch/powerpc/mm/fault.c | 5 +-
arch/powerpc/platforms/cell/ras.c | 6 +-
arch/powerpc/platforms/cell/ras.h | 9 +-
arch/powerpc/platforms/powernv/idle.c | 1 +
23 files changed, 164 insertions(+), 100 deletions(-)
diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index 22c9d08fa3a4..939f3c94c8f3 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -56,35 +56,6 @@ int exit_vmx_usercopy(void);
int enter_vmx_ops(void);
void *exit_vmx_ops(void *dest);
-/* Traps */
-long machine_check_early(struct pt_regs *regs);
-long hmi_exception_realmode(struct pt_regs *regs);
-void SMIException(struct pt_regs *regs);
-void handle_hmi_exception(struct pt_regs *regs);
-void instruction_breakpoint_exception(struct pt_regs *regs);
-void RunModeException(struct pt_regs *regs);
-void single_step_exception(struct pt_regs *regs);
-void program_check_exception(struct pt_regs *regs);
-void alignment_exception(struct pt_regs *regs);
-void StackOverflow(struct pt_regs *regs);
-void stack_overflow_exception(struct pt_regs *regs);
-void kernel_fp_unavailable_exception(struct pt_regs *regs);
-void altivec_unavailable_exception(struct pt_regs *regs);
-void vsx_unavailable_exception(struct pt_regs *regs);
-void fp_unavailable_tm(struct pt_regs *regs);
-void altivec_unavailable_tm(struct pt_regs *regs);
-void vsx_unavailable_tm(struct pt_regs *regs);
-void facility_unavailable_exception(struct pt_regs *regs);
-void TAUException(struct pt_regs *regs);
-void altivec_assist_exception(struct pt_regs *regs);
-void unrecoverable_exception(struct pt_regs *regs);
-void kernel_bad_stack(struct pt_regs *regs);
-void system_reset_exception(struct pt_regs *regs);
-void machine_check_exception(struct pt_regs *regs);
-void emulation_assist_interrupt(struct pt_regs *regs);
-long do_slb_fault(struct pt_regs *regs);
-void do_bad_slb_fault(struct pt_regs *regs);
-
/* signals, syscalls and interrupts */
long sys_swapcontext(struct ucontext __user *old_ctx,
struct ucontext __user *new_ctx,
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index b9968e297da2..066b1d34c7bc 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -454,7 +454,6 @@ static inline unsigned long hpt_hash(unsigned long vpn,
#define HPTE_NOHPTE_UPDATE 0x2
#define HPTE_USE_KERNEL_KEY 0x4
-long do_hash_fault(struct pt_regs *regs);
extern int __hash_page_4K(unsigned long ea, unsigned long access,
unsigned long vsid, pte_t *ptep, unsigned long trap,
unsigned long flags, int ssize, int subpage_prot);
diff --git a/arch/powerpc/include/asm/debug.h b/arch/powerpc/include/asm/debug.h
index 0550eceab3ca..86a14736c76c 100644
--- a/arch/powerpc/include/asm/debug.h
+++ b/arch/powerpc/include/asm/debug.h
@@ -50,9 +50,6 @@ bool ppc_breakpoint_available(void);
#ifdef CONFIG_PPC_ADV_DEBUG_REGS
extern void do_send_trap(struct pt_regs *regs, unsigned long address,
unsigned long error_code, int brkpt);
-#else
-
-void do_break(struct pt_regs *regs);
#endif
#endif /* _ASM_POWERPC_DEBUG_H */
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 75c2b137fc00..614957f74cee 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -50,15 +50,6 @@
#ifndef __ASSEMBLY__
-extern void replay_system_reset(void);
-extern void replay_soft_interrupts(void);
-
-extern void timer_interrupt(struct pt_regs *);
-extern void performance_monitor_exception(struct pt_regs *regs);
-extern void WatchdogException(struct pt_regs *regs);
-extern void unknown_exception(struct pt_regs *regs);
-void unknown_async_exception(struct pt_regs *regs);
-
#ifdef CONFIG_PPC64
#include <asm/paca.h>
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 3df1921cfda3..4ffbd3d75324 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -232,4 +232,70 @@ __visible noinstr long func(struct pt_regs *regs) \
\
static __always_inline long ____##func(struct pt_regs *regs)
+
+/* Interrupt handlers */
+/* kernel/traps.c */
+DECLARE_INTERRUPT_HANDLER_NMI(system_reset_exception);
+#ifdef CONFIG_PPC_BOOK3S_64
+DECLARE_INTERRUPT_HANDLER_ASYNC(machine_check_exception);
+#else
+DECLARE_INTERRUPT_HANDLER_NMI(machine_check_exception);
+#endif
+DECLARE_INTERRUPT_HANDLER(SMIException);
+DECLARE_INTERRUPT_HANDLER(handle_hmi_exception);
+DECLARE_INTERRUPT_HANDLER(unknown_exception);
+DECLARE_INTERRUPT_HANDLER_ASYNC(unknown_async_exception);
+DECLARE_INTERRUPT_HANDLER(instruction_breakpoint_exception);
+DECLARE_INTERRUPT_HANDLER(RunModeException);
+DECLARE_INTERRUPT_HANDLER(single_step_exception);
+DECLARE_INTERRUPT_HANDLER(program_check_exception);
+DECLARE_INTERRUPT_HANDLER(emulation_assist_interrupt);
+DECLARE_INTERRUPT_HANDLER(alignment_exception);
+DECLARE_INTERRUPT_HANDLER(StackOverflow);
+DECLARE_INTERRUPT_HANDLER(stack_overflow_exception);
+DECLARE_INTERRUPT_HANDLER(kernel_fp_unavailable_exception);
+DECLARE_INTERRUPT_HANDLER(altivec_unavailable_exception);
+DECLARE_INTERRUPT_HANDLER(vsx_unavailable_exception);
+DECLARE_INTERRUPT_HANDLER(facility_unavailable_exception);
+DECLARE_INTERRUPT_HANDLER(fp_unavailable_tm);
+DECLARE_INTERRUPT_HANDLER(altivec_unavailable_tm);
+DECLARE_INTERRUPT_HANDLER(vsx_unavailable_tm);
+DECLARE_INTERRUPT_HANDLER_NMI(performance_monitor_exception_nmi);
+DECLARE_INTERRUPT_HANDLER_ASYNC(performance_monitor_exception_async);
+DECLARE_INTERRUPT_HANDLER_RAW(performance_monitor_exception);
+DECLARE_INTERRUPT_HANDLER(DebugException);
+DECLARE_INTERRUPT_HANDLER(altivec_assist_exception);
+DECLARE_INTERRUPT_HANDLER(CacheLockingException);
+DECLARE_INTERRUPT_HANDLER(SPEFloatingPointException);
+DECLARE_INTERRUPT_HANDLER(SPEFloatingPointRoundException);
+DECLARE_INTERRUPT_HANDLER(unrecoverable_exception);
+DECLARE_INTERRUPT_HANDLER(WatchdogException);
+DECLARE_INTERRUPT_HANDLER(kernel_bad_stack);
+
+/* slb.c */
+DECLARE_INTERRUPT_HANDLER_RAW(do_slb_fault);
+DECLARE_INTERRUPT_HANDLER(do_bad_slb_fault);
+
+/* hash_utils.c */
+DECLARE_INTERRUPT_HANDLER_RAW(do_hash_fault);
+
+/* fault.c */
+DECLARE_INTERRUPT_HANDLER_RET(do_page_fault);
+DECLARE_INTERRUPT_HANDLER(do_bad_page_fault_segv);
+
+/* process.c */
+DECLARE_INTERRUPT_HANDLER(do_break);
+
+/* time.c */
+DECLARE_INTERRUPT_HANDLER_ASYNC(timer_interrupt);
+
+/* mce.c */
+DECLARE_INTERRUPT_HANDLER_NMI(machine_check_early);
+DECLARE_INTERRUPT_HANDLER_NMI(hmi_exception_realmode);
+
+DECLARE_INTERRUPT_HANDLER_ASYNC(TAUException);
+
+void replay_system_reset(void);
+void replay_soft_interrupts(void);
+
#endif /* _ASM_POWERPC_INTERRUPT_H */
diff --git a/arch/powerpc/kernel/dbell.c b/arch/powerpc/kernel/dbell.c
index 52680cf07c9d..6a7ecfca5c3b 100644
--- a/arch/powerpc/kernel/dbell.c
+++ b/arch/powerpc/kernel/dbell.c
@@ -12,13 +12,14 @@
#include <linux/hardirq.h>
#include <asm/dbell.h>
+#include <asm/interrupt.h>
#include <asm/irq_regs.h>
#include <asm/kvm_ppc.h>
#include <asm/trace.h>
#ifdef CONFIG_SMP
-void doorbell_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_ASYNC(doorbell_exception)
{
struct pt_regs *old_regs = set_irq_regs(regs);
@@ -39,9 +40,8 @@ void doorbell_exception(struct pt_regs *regs)
set_irq_regs(old_regs);
}
#else /* CONFIG_SMP */
-void doorbell_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_ASYNC(doorbell_exception)
{
printk(KERN_WARNING "Received doorbell on non-smp system\n");
}
#endif /* CONFIG_SMP */
-
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 6b1eca53e36c..2055d204d08e 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -54,6 +54,7 @@
#include <linux/pgtable.h>
#include <linux/uaccess.h>
+#include <asm/interrupt.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/cache.h>
@@ -665,7 +666,7 @@ void __do_irq(struct pt_regs *regs)
irq_exit();
}
-void do_IRQ(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_ASYNC(do_IRQ)
{
struct pt_regs *old_regs = set_irq_regs(regs);
void *cursp, *irqsp, *sirqsp;
diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
index 9f3e133b57b7..54269947113d 100644
--- a/arch/powerpc/kernel/mce.c
+++ b/arch/powerpc/kernel/mce.c
@@ -18,6 +18,7 @@
#include <linux/extable.h>
#include <linux/ftrace.h>
+#include <asm/interrupt.h>
#include <asm/machdep.h>
#include <asm/mce.h>
#include <asm/nmi.h>
@@ -588,7 +589,7 @@ EXPORT_SYMBOL_GPL(machine_check_print_event_info);
*
* regs->nip and regs->msr contains srr0 and ssr1.
*/
-long notrace machine_check_early(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_NMI(machine_check_early)
{
long handled = 0;
u8 ftrace_enabled = this_cpu_get_ftrace_enabled();
@@ -722,7 +723,7 @@ long hmi_handle_debugtrig(struct pt_regs *regs)
/*
* Return values:
*/
-long hmi_exception_realmode(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_NMI(hmi_exception_realmode)
{
int ret;
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 4f0f81e9420b..8520ed5ae144 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -41,6 +41,7 @@
#include <linux/pkeys.h>
#include <linux/seq_buf.h>
+#include <asm/interrupt.h>
#include <asm/io.h>
#include <asm/processor.h>
#include <asm/mmu.h>
@@ -659,7 +660,7 @@ static void do_break_handler(struct pt_regs *regs)
}
}
-void do_break(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(do_break)
{
current->thread.trap_nr = TRAP_HWBKPT;
if (notify_die(DIE_DABR_MATCH, "dabr_match", regs, regs->dsisr,
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index 7c85ed04a164..dd87b2118620 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -5,6 +5,7 @@
#include <asm/kup.h>
#include <asm/cputime.h>
#include <asm/hw_irq.h>
+#include <asm/interrupt.h>
#include <asm/kprobes.h>
#include <asm/paca.h>
#include <asm/ptrace.h>
diff --git a/arch/powerpc/kernel/tau_6xx.c b/arch/powerpc/kernel/tau_6xx.c
index 0b4694b8d248..46b2e5de4ef5 100644
--- a/arch/powerpc/kernel/tau_6xx.c
+++ b/arch/powerpc/kernel/tau_6xx.c
@@ -100,7 +100,7 @@ static void TAUupdate(int cpu)
* with interrupts disabled
*/
-void TAUException(struct pt_regs * regs)
+DEFINE_INTERRUPT_HANDLER_ASYNC(TAUException)
{
int cpu = smp_processor_id();
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 67feb3524460..435a251247ed 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -56,6 +56,7 @@
#include <linux/processor.h>
#include <asm/trace.h>
+#include <asm/interrupt.h>
#include <asm/io.h>
#include <asm/nvram.h>
#include <asm/cache.h>
@@ -570,7 +571,7 @@ void arch_irq_work_raise(void)
* timer_interrupt - gets called when the decrementer overflows,
* with interrupts disabled.
*/
-void timer_interrupt(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt)
{
struct clock_event_device *evt = this_cpu_ptr(&decrementers);
u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index f70d3f6174c8..b000956cde3f 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -41,6 +41,7 @@
#include <asm/emulated_ops.h>
#include <linux/uaccess.h>
#include <asm/debugfs.h>
+#include <asm/interrupt.h>
#include <asm/io.h>
#include <asm/machdep.h>
#include <asm/rtas.h>
@@ -430,8 +431,7 @@ void hv_nmi_check_nonrecoverable(struct pt_regs *regs)
regs->msr &= ~MSR_RI;
#endif
}
-
-void system_reset_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_NMI(system_reset_exception)
{
unsigned long hsrr0, hsrr1;
bool saved_hsrrs = false;
@@ -519,6 +519,8 @@ void system_reset_exception(struct pt_regs *regs)
this_cpu_set_ftrace_enabled(ftrace_enabled);
/* What should we do here? We could issue a shutdown or hard reset. */
+
+ return 0;
}
NOKPROBE_SYMBOL(system_reset_exception);
@@ -805,7 +807,11 @@ void die_mce(const char *str, struct pt_regs *regs, long err)
}
NOKPROBE_SYMBOL(die_mce);
-void machine_check_exception(struct pt_regs *regs)
+#ifdef CONFIG_PPC_BOOK3S_64
+DEFINE_INTERRUPT_HANDLER_ASYNC(machine_check_exception)
+#else
+DEFINE_INTERRUPT_HANDLER_NMI(machine_check_exception)
+#endif
{
int recover = 0;
@@ -855,10 +861,16 @@ void machine_check_exception(struct pt_regs *regs)
die_mce("Unrecoverable Machine check", regs, SIGBUS);
if (nmi) nmi_exit();
+
+#ifdef CONFIG_PPC_BOOK3S_64
+ return;
+#else
+ return 0;
+#endif
}
NOKPROBE_SYMBOL(machine_check_exception);
-void SMIException(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(SMIException) /* async? */
{
die("System Management Interrupt", regs, SIGABRT);
}
@@ -1044,7 +1056,7 @@ static void p9_hmi_special_emu(struct pt_regs *regs)
}
#endif /* CONFIG_VSX */
-void handle_hmi_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_ASYNC(handle_hmi_exception)
{
struct pt_regs *old_regs;
@@ -1073,7 +1085,7 @@ void handle_hmi_exception(struct pt_regs *regs)
set_irq_regs(old_regs);
}
-void unknown_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(unknown_exception)
{
enum ctx_state prev_state = exception_enter();
@@ -1085,7 +1097,7 @@ void unknown_exception(struct pt_regs *regs)
exception_exit(prev_state);
}
-void unknown_async_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_ASYNC(unknown_async_exception)
{
enum ctx_state prev_state = exception_enter();
@@ -1097,7 +1109,7 @@ void unknown_async_exception(struct pt_regs *regs)
exception_exit(prev_state);
}
-void instruction_breakpoint_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(instruction_breakpoint_exception)
{
enum ctx_state prev_state = exception_enter();
@@ -1112,12 +1124,12 @@ void instruction_breakpoint_exception(struct pt_regs *regs)
exception_exit(prev_state);
}
-void RunModeException(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(RunModeException)
{
_exception(SIGTRAP, regs, TRAP_UNK, 0);
}
-void single_step_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(single_step_exception)
{
enum ctx_state prev_state = exception_enter();
@@ -1462,7 +1474,7 @@ static int emulate_math(struct pt_regs *regs)
static inline int emulate_math(struct pt_regs *regs) { return -1; }
#endif
-void program_check_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(program_check_exception)
{
enum ctx_state prev_state = exception_enter();
unsigned int reason = get_reason(regs);
@@ -1587,14 +1599,14 @@ NOKPROBE_SYMBOL(program_check_exception);
* This occurs when running in hypervisor mode on POWER6 or later
* and an illegal instruction is encountered.
*/
-void emulation_assist_interrupt(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(emulation_assist_interrupt)
{
regs->msr |= REASON_ILLEGAL;
program_check_exception(regs);
}
NOKPROBE_SYMBOL(emulation_assist_interrupt);
-void alignment_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(alignment_exception)
{
enum ctx_state prev_state = exception_enter();
int sig, code, fixed = 0;
@@ -1644,7 +1656,7 @@ void alignment_exception(struct pt_regs *regs)
exception_exit(prev_state);
}
-void StackOverflow(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(StackOverflow)
{
pr_crit("Kernel stack overflow in process %s[%d], r1=%lx\n",
current->comm, task_pid_nr(current), regs->gpr[1]);
@@ -1653,7 +1665,7 @@ void StackOverflow(struct pt_regs *regs)
panic("kernel stack overflow");
}
-void stack_overflow_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(stack_overflow_exception)
{
enum ctx_state prev_state = exception_enter();
@@ -1662,7 +1674,7 @@ void stack_overflow_exception(struct pt_regs *regs)
exception_exit(prev_state);
}
-void kernel_fp_unavailable_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(kernel_fp_unavailable_exception)
{
enum ctx_state prev_state = exception_enter();
@@ -1673,7 +1685,7 @@ void kernel_fp_unavailable_exception(struct pt_regs *regs)
exception_exit(prev_state);
}
-void altivec_unavailable_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(altivec_unavailable_exception)
{
enum ctx_state prev_state = exception_enter();
@@ -1692,7 +1704,7 @@ void altivec_unavailable_exception(struct pt_regs *regs)
exception_exit(prev_state);
}
-void vsx_unavailable_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(vsx_unavailable_exception)
{
if (user_mode(regs)) {
/* A user program has executed an vsx instruction,
@@ -1723,7 +1735,7 @@ static void tm_unavailable(struct pt_regs *regs)
die("Unrecoverable TM Unavailable Exception", regs, SIGABRT);
}
-void facility_unavailable_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(facility_unavailable_exception)
{
static char *facility_strings[] = {
[FSCR_FP_LG] = "FPU",
@@ -1843,7 +1855,7 @@ void facility_unavailable_exception(struct pt_regs *regs)
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-void fp_unavailable_tm(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(fp_unavailable_tm)
{
/* Note: This does not handle any kind of FP laziness. */
@@ -1876,7 +1888,7 @@ void fp_unavailable_tm(struct pt_regs *regs)
tm_recheckpoint(&current->thread);
}
-void altivec_unavailable_tm(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(altivec_unavailable_tm)
{
/* See the comments in fp_unavailable_tm(). This function operates
* the same way.
@@ -1891,7 +1903,7 @@ void altivec_unavailable_tm(struct pt_regs *regs)
current->thread.used_vr = 1;
}
-void vsx_unavailable_tm(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(vsx_unavailable_tm)
{
/* See the comments in fp_unavailable_tm(). This works similarly,
* though we're loading both FP and VEC registers in here.
@@ -1916,7 +1928,8 @@ void vsx_unavailable_tm(struct pt_regs *regs)
}
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-static void performance_monitor_exception_nmi(struct pt_regs *regs)
+#ifdef CONFIG_PPC64
+DEFINE_INTERRUPT_HANDLER_NMI(performance_monitor_exception_nmi)
{
nmi_enter();
@@ -1925,9 +1938,12 @@ static void performance_monitor_exception_nmi(struct pt_regs *regs)
perf_irq(regs);
nmi_exit();
+
+ return 0;
}
+#endif
-static void performance_monitor_exception_async(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_ASYNC(performance_monitor_exception_async)
{
irq_enter();
@@ -1938,7 +1954,7 @@ static void performance_monitor_exception_async(struct pt_regs *regs)
irq_exit();
}
-void performance_monitor_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_RAW(performance_monitor_exception)
{
/*
* On 64-bit, if perf interrupts hit in a local_irq_disable
@@ -1950,6 +1966,8 @@ void performance_monitor_exception(struct pt_regs *regs)
performance_monitor_exception_nmi(regs);
else
performance_monitor_exception_async(regs);
+
+ return 0;
}
#ifdef CONFIG_PPC_ADV_DEBUG_REGS
@@ -2012,7 +2030,7 @@ static void handle_debug(struct pt_regs *regs, unsigned long debug_status)
mtspr(SPRN_DBCR0, current->thread.debug.dbcr0);
}
-void DebugException(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(DebugException)
{
unsigned long debug_status = regs->dsisr;
@@ -2085,7 +2103,7 @@ NOKPROBE_SYMBOL(DebugException);
#endif /* CONFIG_PPC_ADV_DEBUG_REGS */
#ifdef CONFIG_ALTIVEC
-void altivec_assist_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(altivec_assist_exception)
{
int err;
@@ -2119,7 +2137,7 @@ void altivec_assist_exception(struct pt_regs *regs)
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_FSL_BOOKE
-void CacheLockingException(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(CacheLockingException)
{
unsigned long error_code = regs->dsisr;
@@ -2134,7 +2152,7 @@ void CacheLockingException(struct pt_regs *regs)
#endif /* CONFIG_FSL_BOOKE */
#ifdef CONFIG_SPE
-void SPEFloatingPointException(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(SPEFloatingPointException)
{
extern int do_spe_mathemu(struct pt_regs *regs);
unsigned long spefscr;
@@ -2186,7 +2204,7 @@ void SPEFloatingPointException(struct pt_regs *regs)
return;
}
-void SPEFloatingPointRoundException(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(SPEFloatingPointRoundException)
{
extern int speround_handler(struct pt_regs *regs);
int err;
@@ -2228,7 +2246,7 @@ void SPEFloatingPointRoundException(struct pt_regs *regs)
* in the MSR is 0. This indicates that SRR0/1 are live, and that
* we therefore lost state by taking this exception.
*/
-void unrecoverable_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(unrecoverable_exception)
{
pr_emerg("Unrecoverable exception %lx at %lx (msr=%lx)\n",
regs->trap, regs->nip, regs->msr);
@@ -2248,7 +2266,7 @@ void __attribute__ ((weak)) WatchdogHandler(struct pt_regs *regs)
return;
}
-void WatchdogException(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(WatchdogException) /* XXX NMI? async? */
{
printk (KERN_EMERG "PowerPC Book-E Watchdog Exception\n");
WatchdogHandler(regs);
@@ -2259,7 +2277,7 @@ void WatchdogException(struct pt_regs *regs)
* We enter here if we discover during exception entry that we are
* running in supervisor mode with a userspace value in the stack pointer.
*/
-void kernel_bad_stack(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(kernel_bad_stack)
{
printk(KERN_EMERG "Bad kernel stack pointer %lx at %lx\n",
regs->gpr[1], regs->nip);
diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
index af3c15a1d41e..824b9376ac35 100644
--- a/arch/powerpc/kernel/watchdog.c
+++ b/arch/powerpc/kernel/watchdog.c
@@ -26,6 +26,7 @@
#include <linux/delay.h>
#include <linux/smp.h>
+#include <asm/interrupt.h>
#include <asm/paca.h>
/*
@@ -247,14 +248,14 @@ static void watchdog_timer_interrupt(int cpu)
watchdog_smp_panic(cpu, tb);
}
-void soft_nmi_interrupt(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_NMI(soft_nmi_interrupt)
{
unsigned long flags;
int cpu = raw_smp_processor_id();
u64 tb;
if (!cpumask_test_cpu(cpu, &wd_cpus_enabled))
- return;
+ return 0;
nmi_enter();
@@ -291,6 +292,8 @@ void soft_nmi_interrupt(struct pt_regs *regs)
out:
nmi_exit();
+
+ return 0;
}
static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index d348e77cee20..43e64e60ee33 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -53,6 +53,7 @@
#include <asm/cputable.h>
#include <asm/cacheflush.h>
#include <linux/uaccess.h>
+#include <asm/interrupt.h>
#include <asm/io.h>
#include <asm/kvm_ppc.h>
#include <asm/kvm_book3s.h>
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index 8053efdf7ea7..10fc274bea65 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -17,6 +17,7 @@
#include <asm/asm-prototypes.h>
#include <asm/cputable.h>
+#include <asm/interrupt.h>
#include <asm/kvm_ppc.h>
#include <asm/kvm_book3s.h>
#include <asm/archrandom.h>
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 288a9820ec01..bd2bb73021d8 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -20,6 +20,7 @@
#include <asm/cputable.h>
#include <linux/uaccess.h>
+#include <asm/interrupt.h>
#include <asm/kvm_ppc.h>
#include <asm/cacheflush.h>
#include <asm/dbell.h>
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index d7d3a80a51d4..824fcb2627e4 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -38,6 +38,7 @@
#include <linux/pgtable.h>
#include <asm/debugfs.h>
+#include <asm/interrupt.h>
#include <asm/processor.h>
#include <asm/mmu.h>
#include <asm/mmu_context.h>
@@ -1512,7 +1513,7 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
}
EXPORT_SYMBOL_GPL(hash_page);
-static long __do_hash_fault(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
{
unsigned long ea = regs->dar;
unsigned long dsisr = regs->dsisr;
@@ -1565,7 +1566,11 @@ static long __do_hash_fault(struct pt_regs *regs)
return err;
}
-long do_hash_fault(struct pt_regs *regs)
+/*
+ * The _RAW interrupt entry checks for the in_nmi() case before
+ * running the full handler.
+ */
+DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
{
unsigned long dsisr = regs->dsisr;
long err;
@@ -1587,7 +1592,7 @@ long do_hash_fault(struct pt_regs *regs)
* the access, or panic if there isn't a handler.
*/
if (unlikely(in_nmi())) {
- bad_page_fault(regs, SIGSEGV);
+ do_bad_page_fault_segv(regs);
return 0;
}
diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
index 14c62b685f0c..c91bd85eb90e 100644
--- a/arch/powerpc/mm/book3s64/slb.c
+++ b/arch/powerpc/mm/book3s64/slb.c
@@ -10,6 +10,7 @@
*/
#include <asm/asm-prototypes.h>
+#include <asm/interrupt.h>
#include <asm/mmu.h>
#include <asm/mmu_context.h>
#include <asm/paca.h>
@@ -813,7 +814,7 @@ static long slb_allocate_user(struct mm_struct *mm, unsigned long ea)
return slb_insert_entry(ea, context, flags, ssize, false);
}
-long do_slb_fault(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_RAW(do_slb_fault)
{
unsigned long ea = regs->dar;
unsigned long id = get_region_id(ea);
@@ -833,7 +834,7 @@ long do_slb_fault(struct pt_regs *regs)
*/
/*
- * The interrupt state is not reconciled, for performance, so that
+ * This is a raw interrupt handler, for performance, so that
* fast_interrupt_return can be used. The handler must not touch local
* irq state, or schedule. We could test for usermode and upgrade to a
* normal process context (synchronous) interrupt for those, which
@@ -868,7 +869,7 @@ long do_slb_fault(struct pt_regs *regs)
}
}
-void do_bad_slb_fault(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(do_bad_slb_fault)
{
int err = regs->result;
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 900901d0038e..b84794d53664 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -34,6 +34,7 @@
#include <linux/uaccess.h>
#include <asm/firmware.h>
+#include <asm/interrupt.h>
#include <asm/page.h>
#include <asm/mmu.h>
#include <asm/mmu_context.h>
@@ -540,7 +541,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
}
NOKPROBE_SYMBOL(__do_page_fault);
-long do_page_fault(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER_RET(do_page_fault)
{
enum ctx_state prev_state;
long err;
@@ -623,7 +624,7 @@ void bad_page_fault(struct pt_regs *regs, int sig)
}
#ifdef CONFIG_PPC_BOOK3S_64
-void do_bad_page_fault_segv(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(do_bad_page_fault_segv)
{
bad_page_fault(regs, SIGSEGV);
}
diff --git a/arch/powerpc/platforms/cell/ras.c b/arch/powerpc/platforms/cell/ras.c
index 6ea480539419..4325c05bedd9 100644
--- a/arch/powerpc/platforms/cell/ras.c
+++ b/arch/powerpc/platforms/cell/ras.c
@@ -49,7 +49,7 @@ static void dump_fir(int cpu)
}
-void cbe_system_error_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(cbe_system_error_exception)
{
int cpu = smp_processor_id();
@@ -58,7 +58,7 @@ void cbe_system_error_exception(struct pt_regs *regs)
dump_stack();
}
-void cbe_maintenance_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(cbe_maintenance_exception)
{
int cpu = smp_processor_id();
@@ -70,7 +70,7 @@ void cbe_maintenance_exception(struct pt_regs *regs)
dump_stack();
}
-void cbe_thermal_exception(struct pt_regs *regs)
+DEFINE_INTERRUPT_HANDLER(cbe_thermal_exception)
{
int cpu = smp_processor_id();
diff --git a/arch/powerpc/platforms/cell/ras.h b/arch/powerpc/platforms/cell/ras.h
index 6c2e6bc0062e..226dbd48efad 100644
--- a/arch/powerpc/platforms/cell/ras.h
+++ b/arch/powerpc/platforms/cell/ras.h
@@ -2,9 +2,12 @@
#ifndef RAS_H
#define RAS_H
-extern void cbe_system_error_exception(struct pt_regs *regs);
-extern void cbe_maintenance_exception(struct pt_regs *regs);
-extern void cbe_thermal_exception(struct pt_regs *regs);
+#include <asm/interrupt.h>
+
+DECLARE_INTERRUPT_HANDLER(cbe_system_error_exception);
+DECLARE_INTERRUPT_HANDLER(cbe_maintenance_exception);
+DECLARE_INTERRUPT_HANDLER(cbe_thermal_exception);
+
extern void cbe_ras_init(void);
#endif /* RAS_H */
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index e6f461812856..999997d9e9a9 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -14,6 +14,7 @@
#include <asm/asm-prototypes.h>
#include <asm/firmware.h>
+#include <asm/interrupt.h>
#include <asm/machdep.h>
#include <asm/opal.h>
#include <asm/cputhreads.h>
--
2.23.0
* [PATCH v6 26/39] powerpc: add interrupt_cond_local_irq_enable helper
@ 2021-01-15 16:49 ` Nicholas Piggin
From: Nicholas Piggin @ 2021-01-15 16:49 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Simple helper for synchronous interrupt handlers (i.e., process-context)
to enable interrupts if the interrupt was taken in an interrupts-enabled
context.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 7 +++++++
arch/powerpc/kernel/traps.c | 24 +++++++-----------------
arch/powerpc/mm/fault.c | 4 +---
3 files changed, 15 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 4ffbd3d75324..488bdd5bd922 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -3,6 +3,7 @@
#define _ASM_POWERPC_INTERRUPT_H
#include <linux/context_tracking.h>
+#include <linux/hardirq.h>
#include <asm/ftrace.h>
struct interrupt_state {
@@ -298,4 +299,10 @@ DECLARE_INTERRUPT_HANDLER_ASYNC(TAUException);
void replay_system_reset(void);
void replay_soft_interrupts(void);
+static inline void interrupt_cond_local_irq_enable(struct pt_regs *regs)
+{
+ if (!arch_irq_disabled_regs(regs))
+ local_irq_enable();
+}
+
#endif /* _ASM_POWERPC_INTERRUPT_H */
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index b000956cde3f..076e5ff75cf7 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -343,8 +343,8 @@ static bool exception_common(int signr, struct pt_regs *regs, int code,
show_signal_msg(signr, regs, code, addr);
- if (arch_irqs_disabled() && !arch_irq_disabled_regs(regs))
- local_irq_enable();
+ if (arch_irqs_disabled())
+ interrupt_cond_local_irq_enable(regs);
current->thread.trap_nr = code;
@@ -1556,9 +1556,7 @@ DEFINE_INTERRUPT_HANDLER(program_check_exception)
if (!user_mode(regs))
goto sigill;
- /* We restore the interrupt state now */
- if (!arch_irq_disabled_regs(regs))
- local_irq_enable();
+ interrupt_cond_local_irq_enable(regs);
/* (reason & REASON_ILLEGAL) would be the obvious thing here,
* but there seems to be a hardware bug on the 405GP (RevD)
@@ -1612,9 +1610,7 @@ DEFINE_INTERRUPT_HANDLER(alignment_exception)
int sig, code, fixed = 0;
unsigned long reason;
- /* We restore the interrupt state now */
- if (!arch_irq_disabled_regs(regs))
- local_irq_enable();
+ interrupt_cond_local_irq_enable(regs);
reason = get_reason(regs);
@@ -1775,9 +1771,7 @@ DEFINE_INTERRUPT_HANDLER(facility_unavailable_exception)
die("Unexpected facility unavailable exception", regs, SIGABRT);
}
- /* We restore the interrupt state now */
- if (!arch_irq_disabled_regs(regs))
- local_irq_enable();
+ interrupt_cond_local_irq_enable(regs);
if (status == FSCR_DSCR_LG) {
/*
@@ -2160,9 +2154,7 @@ DEFINE_INTERRUPT_HANDLER(SPEFloatingPointException)
int code = FPE_FLTUNK;
int err;
- /* We restore the interrupt state now */
- if (!arch_irq_disabled_regs(regs))
- local_irq_enable();
+ interrupt_cond_local_irq_enable(regs);
flush_spe_to_thread(current);
@@ -2209,9 +2201,7 @@ DEFINE_INTERRUPT_HANDLER(SPEFloatingPointRoundException)
extern int speround_handler(struct pt_regs *regs);
int err;
- /* We restore the interrupt state now */
- if (!arch_irq_disabled_regs(regs))
- local_irq_enable();
+ interrupt_cond_local_irq_enable(regs);
preempt_disable();
if (regs->msr & MSR_SPE)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index b84794d53664..eef7281507f9 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -434,9 +434,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
return bad_area_nosemaphore(regs, address);
}
- /* We restore the interrupt state now */
- if (!arch_irq_disabled_regs(regs))
- local_irq_enable();
+ interrupt_cond_local_irq_enable(regs);
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
--
2.23.0
* [PATCH v6 27/39] powerpc/64: context tracking remove _TIF_NOHZ
@ 2021-01-15 16:50 ` Nicholas Piggin
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Add context tracking to the system call handler explicitly, and remove
_TIF_NOHZ.
This improves system call performance when nohz_full is enabled. On a
POWER9, the cost of a gettid scv system call improves from 1129 to 1004
cycles on a nohz_full CPU, and from 550 to 430 cycles on a housekeeping
CPU.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/Kconfig | 1 -
arch/powerpc/include/asm/thread_info.h | 4 +---
arch/powerpc/kernel/ptrace/ptrace.c | 4 ----
arch/powerpc/kernel/signal.c | 4 ----
arch/powerpc/kernel/syscall_64.c | 10 ++++++++++
5 files changed, 11 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 107bb4319e0e..28d5a1b1510f 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -196,7 +196,6 @@ config PPC
select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r13)
select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r2)
select HAVE_CONTEXT_TRACKING if PPC64
- select HAVE_TIF_NOHZ if PPC64
select HAVE_DEBUG_KMEMLEAK
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DYNAMIC_FTRACE
diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index 3d8a47af7a25..386d576673a1 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -94,7 +94,6 @@ void arch_setup_new_exec(void);
#define TIF_PATCH_PENDING 6 /* pending live patching update */
#define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */
#define TIF_SINGLESTEP 8 /* singlestepping active */
-#define TIF_NOHZ 9 /* in adaptive nohz mode */
#define TIF_SECCOMP 10 /* secure computing */
#define TIF_RESTOREALL 11 /* Restore all regs (implies NOERROR) */
#define TIF_NOERROR 12 /* Force successful syscall return */
@@ -128,11 +127,10 @@ void arch_setup_new_exec(void);
#define _TIF_UPROBE (1<<TIF_UPROBE)
#define _TIF_SYSCALL_TRACEPOINT (1<<TIF_SYSCALL_TRACEPOINT)
#define _TIF_EMULATE_STACK_STORE (1<<TIF_EMULATE_STACK_STORE)
-#define _TIF_NOHZ (1<<TIF_NOHZ)
#define _TIF_SYSCALL_EMU (1<<TIF_SYSCALL_EMU)
#define _TIF_SYSCALL_DOTRACE (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
_TIF_SECCOMP | _TIF_SYSCALL_TRACEPOINT | \
- _TIF_NOHZ | _TIF_SYSCALL_EMU)
+ _TIF_SYSCALL_EMU)
#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
_TIF_NOTIFY_RESUME | _TIF_UPROBE | \
diff --git a/arch/powerpc/kernel/ptrace/ptrace.c b/arch/powerpc/kernel/ptrace/ptrace.c
index 3d44b73adb83..4f3d4ff3728c 100644
--- a/arch/powerpc/kernel/ptrace/ptrace.c
+++ b/arch/powerpc/kernel/ptrace/ptrace.c
@@ -262,8 +262,6 @@ long do_syscall_trace_enter(struct pt_regs *regs)
{
u32 flags;
- user_exit();
-
flags = READ_ONCE(current_thread_info()->flags) &
(_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE);
@@ -340,8 +338,6 @@ void do_syscall_trace_leave(struct pt_regs *regs)
step = test_thread_flag(TIF_SINGLESTEP);
if (step || test_thread_flag(TIF_SYSCALL_TRACE))
tracehook_report_syscall_exit(regs, step);
-
- user_enter();
}
void __init pt_regs_check(void);
diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
index 53782aa60ade..9ded046edb0e 100644
--- a/arch/powerpc/kernel/signal.c
+++ b/arch/powerpc/kernel/signal.c
@@ -282,8 +282,6 @@ static void do_signal(struct task_struct *tsk)
void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
{
- user_exit();
-
if (thread_info_flags & _TIF_UPROBE)
uprobe_notify_resume(regs);
@@ -299,8 +297,6 @@ void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
tracehook_notify_resume(regs);
rseq_handle_notify_resume(NULL, regs);
}
-
- user_enter();
}
static unsigned long get_tm_stackpointer(struct task_struct *tsk)
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index dd87b2118620..d7d256a7a41f 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -1,9 +1,11 @@
// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/context_tracking.h>
#include <linux/err.h>
#include <asm/asm-prototypes.h>
#include <asm/kup.h>
#include <asm/cputime.h>
+#include <asm/interrupt.h>
#include <asm/hw_irq.h>
#include <asm/interrupt.h>
#include <asm/kprobes.h>
@@ -28,6 +30,9 @@ notrace long system_call_exception(long r3, long r4, long r5,
if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
BUG_ON(irq_soft_mask_return() != IRQS_ALL_DISABLED);
+ CT_WARN_ON(ct_state() == CONTEXT_KERNEL);
+ user_exit_irqoff();
+
trace_hardirqs_off(); /* finish reconciling */
if (IS_ENABLED(CONFIG_PPC_BOOK3S))
@@ -182,6 +187,8 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
unsigned long ti_flags;
unsigned long ret = 0;
+ CT_WARN_ON(ct_state() == CONTEXT_USER);
+
kuap_check_amr();
regs->result = r3;
@@ -258,8 +265,11 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
}
}
+ user_enter_irqoff();
+
/* scv need not set RI=0 because SRRs are not used */
if (unlikely(!prep_irq_for_enabled_exit(!scv))) {
+ user_exit_irqoff();
local_irq_enable();
goto again;
}
--
2.23.0
* [PATCH v6 28/39] powerpc/64s/hash: improve context tracking of hash faults
@ 2021-01-15 16:50 ` Nicholas Piggin
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This moves the 64s/hash context tracking from hash_page_mm() to
__do_hash_fault(), so it is no longer called by the OCXL / SPU
accelerators. That was certainly the wrong thing to be doing: those
callers are not low-level interrupt handlers, so they should already
have entered kernel context tracking.
Then remain in kernel context for the duration of the fault,
rather than enter/exit for the hash fault then enter/exit for
the page fault, which is pointless.
Even so, calling exception_enter/exit in __do_hash_fault seems
questionable, because that touches per-cpu variables, tracing, etc.,
which might have been interrupted by this hash fault or might themselves
cause hash faults. But maybe I'm missing something, because hash_page_mm
very deliberately calls trace_hash_fault too, for example. So for now go
with it; it's no worse than before in this regard.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/bug.h | 1 +
arch/powerpc/mm/book3s64/hash_utils.c | 7 ++++---
arch/powerpc/mm/fault.c | 30 +++++++++++++++++++++------
3 files changed, 29 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index c10ae0a9bbaf..d1635ffbb179 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -112,6 +112,7 @@
struct pt_regs;
long do_page_fault(struct pt_regs *);
+long hash__do_page_fault(struct pt_regs *);
void bad_page_fault(struct pt_regs *, int);
void __bad_page_fault(struct pt_regs *regs, int sig);
void do_bad_page_fault_segv(struct pt_regs *regs);
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 824fcb2627e4..b7b78aeea3eb 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1289,7 +1289,6 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
unsigned long flags)
{
bool is_thp;
- enum ctx_state prev_state = exception_enter();
pgd_t *pgdir;
unsigned long vsid;
pte_t *ptep;
@@ -1491,7 +1490,6 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
DBG_LOW(" -> rc=%d\n", rc);
bail:
- exception_exit(prev_state);
return rc;
}
EXPORT_SYMBOL_GPL(hash_page_mm);
@@ -1515,6 +1513,7 @@ EXPORT_SYMBOL_GPL(hash_page);
DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
{
+ enum ctx_state prev_state = exception_enter();
unsigned long ea = regs->dar;
unsigned long dsisr = regs->dsisr;
unsigned long access = _PAGE_PRESENT | _PAGE_READ;
@@ -1563,6 +1562,8 @@ DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
err = 0;
}
+ exception_exit(prev_state);
+
return err;
}
@@ -1599,7 +1600,7 @@ DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
err = __do_hash_fault(regs);
if (err) {
page_fault:
- err = do_page_fault(regs);
+ err = hash__do_page_fault(regs);
}
return err;
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index eef7281507f9..620ff623b2c6 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -387,7 +387,7 @@ static void sanity_check_fault(bool is_write, bool is_user,
* The return value is 0 if the fault was handled, or the signal
* number if this is a kernel fault that can't be handled here.
*/
-static int __do_page_fault(struct pt_regs *regs, unsigned long address,
+static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
unsigned long error_code)
{
struct vm_area_struct * vma;
@@ -537,15 +537,13 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
return 0;
}
-NOKPROBE_SYMBOL(__do_page_fault);
+NOKPROBE_SYMBOL(___do_page_fault);
-DEFINE_INTERRUPT_HANDLER_RET(do_page_fault)
+static long __do_page_fault(struct pt_regs *regs)
{
- enum ctx_state prev_state;
long err;
- prev_state = exception_enter();
- err = __do_page_fault(regs, regs->dar, regs->dsisr);
+ err = ___do_page_fault(regs, regs->dar, regs->dsisr);
if (unlikely(err)) {
const struct exception_table_entry *entry;
@@ -560,12 +558,32 @@ DEFINE_INTERRUPT_HANDLER_RET(do_page_fault)
}
}
+ return err;
+}
+NOKPROBE_SYMBOL(__do_page_fault);
+
+DEFINE_INTERRUPT_HANDLER_RET(do_page_fault)
+{
+ enum ctx_state prev_state = exception_enter();
+ long err;
+
+ err = __do_page_fault(regs);
+
exception_exit(prev_state);
return err;
}
NOKPROBE_SYMBOL(do_page_fault);
+#ifdef CONFIG_PPC_BOOK3S_64
+/* Same as do_page_fault but interrupt entry has already run in do_hash_fault */
+long hash__do_page_fault(struct pt_regs *regs)
+{
+ return __do_page_fault(regs);
+}
+NOKPROBE_SYMBOL(hash__do_page_fault);
+#endif
+
/*
* bad_page_fault is called when we have a bad access from the kernel.
* It is called from the DSI and ISI handlers in head.S and from some
--
2.23.0
* [PATCH v6 29/39] powerpc/64: context tracking move to interrupt wrappers
@ 2021-01-15 16:50 ` Nicholas Piggin
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This moves exception_enter/exit calls to wrapper functions for
synchronous interrupts. More interrupt handlers are covered by
this than previously.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 9 ++++
arch/powerpc/kernel/traps.c | 74 ++++++---------------------
arch/powerpc/mm/book3s64/hash_utils.c | 3 --
arch/powerpc/mm/fault.c | 9 +---
4 files changed, 27 insertions(+), 68 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 488bdd5bd922..e65ce3e2b071 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -7,10 +7,16 @@
#include <asm/ftrace.h>
struct interrupt_state {
+#ifdef CONFIG_PPC64
+ enum ctx_state ctx_state;
+#endif
};
static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
+#ifdef CONFIG_PPC64
+ state->ctx_state = exception_enter();
+#endif
}
/*
@@ -29,6 +35,9 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
*/
static inline void interrupt_exit_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
+#ifdef CONFIG_PPC64
+ exception_exit(state->ctx_state);
+#endif
}
static inline void interrupt_async_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 076e5ff75cf7..d3892f402b0e 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1087,41 +1087,28 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(handle_hmi_exception)
DEFINE_INTERRUPT_HANDLER(unknown_exception)
{
- enum ctx_state prev_state = exception_enter();
-
printk("Bad trap at PC: %lx, SR: %lx, vector=%lx\n",
regs->nip, regs->msr, regs->trap);
_exception(SIGTRAP, regs, TRAP_UNK, 0);
-
- exception_exit(prev_state);
}
DEFINE_INTERRUPT_HANDLER_ASYNC(unknown_async_exception)
{
- enum ctx_state prev_state = exception_enter();
-
printk("Bad trap at PC: %lx, SR: %lx, vector=%lx\n",
regs->nip, regs->msr, regs->trap);
_exception(SIGTRAP, regs, TRAP_UNK, 0);
-
- exception_exit(prev_state);
}
DEFINE_INTERRUPT_HANDLER(instruction_breakpoint_exception)
{
- enum ctx_state prev_state = exception_enter();
-
if (notify_die(DIE_IABR_MATCH, "iabr_match", regs, 5,
5, SIGTRAP) == NOTIFY_STOP)
- goto bail;
+ return;
if (debugger_iabr_match(regs))
- goto bail;
+ return;
_exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);
-
-bail:
- exception_exit(prev_state);
}
DEFINE_INTERRUPT_HANDLER(RunModeException)
@@ -1131,8 +1118,6 @@ DEFINE_INTERRUPT_HANDLER(RunModeException)
DEFINE_INTERRUPT_HANDLER(single_step_exception)
{
- enum ctx_state prev_state = exception_enter();
-
clear_single_step(regs);
clear_br_trace(regs);
@@ -1141,14 +1126,11 @@ DEFINE_INTERRUPT_HANDLER(single_step_exception)
if (notify_die(DIE_SSTEP, "single_step", regs, 5,
5, SIGTRAP) == NOTIFY_STOP)
- goto bail;
+ return;
if (debugger_sstep(regs))
- goto bail;
+ return;
_exception(SIGTRAP, regs, TRAP_TRACE, regs->nip);
-
-bail:
- exception_exit(prev_state);
}
NOKPROBE_SYMBOL(single_step_exception);
@@ -1476,7 +1458,6 @@ static inline int emulate_math(struct pt_regs *regs) { return -1; }
DEFINE_INTERRUPT_HANDLER(program_check_exception)
{
- enum ctx_state prev_state = exception_enter();
unsigned int reason = get_reason(regs);
/* We can now get here via a FP Unavailable exception if the core
@@ -1485,22 +1466,22 @@ DEFINE_INTERRUPT_HANDLER(program_check_exception)
if (reason & REASON_FP) {
/* IEEE FP exception */
parse_fpe(regs);
- goto bail;
+ return;
}
if (reason & REASON_TRAP) {
unsigned long bugaddr;
/* Debugger is first in line to stop recursive faults in
* rcu_lock, notify_die, or atomic_notifier_call_chain */
if (debugger_bpt(regs))
- goto bail;
+ return;
if (kprobe_handler(regs))
- goto bail;
+ return;
/* trap exception */
if (notify_die(DIE_BPT, "breakpoint", regs, 5, 5, SIGTRAP)
== NOTIFY_STOP)
- goto bail;
+ return;
bugaddr = regs->nip;
/*
@@ -1512,10 +1493,10 @@ DEFINE_INTERRUPT_HANDLER(program_check_exception)
if (!(regs->msr & MSR_PR) && /* not user-mode */
report_bug(bugaddr, regs) == BUG_TRAP_TYPE_WARN) {
regs->nip += 4;
- goto bail;
+ return;
}
_exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);
- goto bail;
+ return;
}
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
if (reason & REASON_TM) {
@@ -1536,7 +1517,7 @@ DEFINE_INTERRUPT_HANDLER(program_check_exception)
*/
if (user_mode(regs)) {
_exception(SIGILL, regs, ILL_ILLOPN, regs->nip);
- goto bail;
+ return;
} else {
printk(KERN_EMERG "Unexpected TM Bad Thing exception "
"at %lx (msr 0x%lx) tm_scratch=%llx\n",
@@ -1567,7 +1548,7 @@ DEFINE_INTERRUPT_HANDLER(program_check_exception)
* pattern to occurrences etc. -dgibson 31/Mar/2003
*/
if (!emulate_math(regs))
- goto bail;
+ return;
/* Try to emulate it if we should. */
if (reason & (REASON_ILLEGAL | REASON_PRIVILEGED)) {
@@ -1575,10 +1556,10 @@ DEFINE_INTERRUPT_HANDLER(program_check_exception)
case 0:
regs->nip += 4;
emulate_single_step(regs);
- goto bail;
+ return;
case -EFAULT:
_exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);
- goto bail;
+ return;
}
}
@@ -1587,9 +1568,6 @@ DEFINE_INTERRUPT_HANDLER(program_check_exception)
_exception(SIGILL, regs, ILL_PRVOPC, regs->nip);
else
_exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
-
-bail:
- exception_exit(prev_state);
}
NOKPROBE_SYMBOL(program_check_exception);
@@ -1606,14 +1584,12 @@ NOKPROBE_SYMBOL(emulation_assist_interrupt);
DEFINE_INTERRUPT_HANDLER(alignment_exception)
{
- enum ctx_state prev_state = exception_enter();
int sig, code, fixed = 0;
unsigned long reason;
interrupt_cond_local_irq_enable(regs);
reason = get_reason(regs);
-
if (reason & REASON_BOUNDARY) {
sig = SIGBUS;
code = BUS_ADRALN;
@@ -1621,7 +1597,7 @@ DEFINE_INTERRUPT_HANDLER(alignment_exception)
}
if (tm_abort_check(regs, TM_CAUSE_ALIGNMENT | TM_CAUSE_PERSISTENT))
- goto bail;
+ return;
/* we don't implement logging of alignment exceptions */
if (!(current->thread.align_ctl & PR_UNALIGN_SIGBUS))
@@ -1631,7 +1607,7 @@ DEFINE_INTERRUPT_HANDLER(alignment_exception)
/* skip over emulated instruction */
regs->nip += inst_length(reason);
emulate_single_step(regs);
- goto bail;
+ return;
}
/* Operand address was bad */
@@ -1647,9 +1623,6 @@ DEFINE_INTERRUPT_HANDLER(alignment_exception)
_exception(sig, regs, code, regs->dar);
else
bad_page_fault(regs, sig);
-
-bail:
- exception_exit(prev_state);
}
DEFINE_INTERRUPT_HANDLER(StackOverflow)
@@ -1663,41 +1636,28 @@ DEFINE_INTERRUPT_HANDLER(StackOverflow)
DEFINE_INTERRUPT_HANDLER(stack_overflow_exception)
{
- enum ctx_state prev_state = exception_enter();
-
die("Kernel stack overflow", regs, SIGSEGV);
-
- exception_exit(prev_state);
}
DEFINE_INTERRUPT_HANDLER(kernel_fp_unavailable_exception)
{
- enum ctx_state prev_state = exception_enter();
-
printk(KERN_EMERG "Unrecoverable FP Unavailable Exception "
"%lx at %lx\n", regs->trap, regs->nip);
die("Unrecoverable FP Unavailable Exception", regs, SIGABRT);
-
- exception_exit(prev_state);
}
DEFINE_INTERRUPT_HANDLER(altivec_unavailable_exception)
{
- enum ctx_state prev_state = exception_enter();
-
if (user_mode(regs)) {
/* A user program has executed an altivec instruction,
but this kernel doesn't support altivec. */
_exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
- goto bail;
+ return;
}
printk(KERN_EMERG "Unrecoverable VMX/Altivec Unavailable Exception "
"%lx at %lx\n", regs->trap, regs->nip);
die("Unrecoverable VMX/Altivec Unavailable Exception", regs, SIGABRT);
-
-bail:
- exception_exit(prev_state);
}
DEFINE_INTERRUPT_HANDLER(vsx_unavailable_exception)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index b7b78aeea3eb..176f05ad3065 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1513,7 +1513,6 @@ EXPORT_SYMBOL_GPL(hash_page);
DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
{
- enum ctx_state prev_state = exception_enter();
unsigned long ea = regs->dar;
unsigned long dsisr = regs->dsisr;
unsigned long access = _PAGE_PRESENT | _PAGE_READ;
@@ -1562,8 +1561,6 @@ DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
err = 0;
}
- exception_exit(prev_state);
-
return err;
}
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 620ff623b2c6..24dcaea2b512 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -564,14 +564,7 @@ NOKPROBE_SYMBOL(__do_page_fault);
DEFINE_INTERRUPT_HANDLER_RET(do_page_fault)
{
- enum ctx_state prev_state = exception_enter();
- long err;
-
- err = __do_page_fault(regs);
-
- exception_exit(prev_state);
-
- return err;
+ return __do_page_fault(regs);
}
NOKPROBE_SYMBOL(do_page_fault);
--
2.23.0
* [PATCH v6 30/39] powerpc/64: add context tracking to asynchronous interrupts
@ 2021-01-15 16:50 ` Nicholas Piggin
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Previously, context tracking was not done for asynchronous interrupts
(those that run in interrupt context). If one of those caused a
reschedule on exit, the scheduling functions (schedule_user,
preempt_schedule_irq) would call exception_enter/exit to fix this up and
exit user context.
This is a hack we would like to get away from, so do context tracking
for asynchronous interrupts too.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index e65ce3e2b071..f7f64c3c514d 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -42,10 +42,12 @@ static inline void interrupt_exit_prepare(struct pt_regs *regs, struct interrupt
static inline void interrupt_async_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
+ interrupt_enter_prepare(regs, state);
}
static inline void interrupt_async_exit_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
+ interrupt_exit_prepare(regs, state);
}
struct interrupt_nmi_state {
--
2.23.0
* [PATCH v6 31/39] powerpc: handle irq_enter/irq_exit in interrupt handler wrappers
@ 2021-01-15 16:50 ` Nicholas Piggin
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Move irq_enter/irq_exit into asynchronous interrupt handler wrappers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 2 ++
arch/powerpc/kernel/dbell.c | 3 +--
arch/powerpc/kernel/irq.c | 4 ----
arch/powerpc/kernel/tau_6xx.c | 3 ---
arch/powerpc/kernel/time.c | 4 ++--
arch/powerpc/kernel/traps.c | 10 +++-------
6 files changed, 8 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index f7f64c3c514d..5a1395499508 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -43,10 +43,12 @@ static inline void interrupt_exit_prepare(struct pt_regs *regs, struct interrupt
static inline void interrupt_async_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
interrupt_enter_prepare(regs, state);
+ irq_enter();
}
static inline void interrupt_async_exit_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
+ irq_exit();
interrupt_exit_prepare(regs, state);
}
diff --git a/arch/powerpc/kernel/dbell.c b/arch/powerpc/kernel/dbell.c
index 6a7ecfca5c3b..5545c9cd17c1 100644
--- a/arch/powerpc/kernel/dbell.c
+++ b/arch/powerpc/kernel/dbell.c
@@ -23,7 +23,6 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(doorbell_exception)
{
struct pt_regs *old_regs = set_irq_regs(regs);
- irq_enter();
trace_doorbell_entry(regs);
ppc_msgsync();
@@ -36,7 +35,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(doorbell_exception)
smp_ipi_demux_relaxed(); /* already performed the barrier */
trace_doorbell_exit(regs);
- irq_exit();
+
set_irq_regs(old_regs);
}
#else /* CONFIG_SMP */
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 2055d204d08e..681abb7c0507 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -641,8 +641,6 @@ void __do_irq(struct pt_regs *regs)
{
unsigned int irq;
- irq_enter();
-
trace_irq_entry(regs);
/*
@@ -662,8 +660,6 @@ void __do_irq(struct pt_regs *regs)
generic_handle_irq(irq);
trace_irq_exit(regs);
-
- irq_exit();
}
DEFINE_INTERRUPT_HANDLER_ASYNC(do_IRQ)
diff --git a/arch/powerpc/kernel/tau_6xx.c b/arch/powerpc/kernel/tau_6xx.c
index 46b2e5de4ef5..d864f07bab74 100644
--- a/arch/powerpc/kernel/tau_6xx.c
+++ b/arch/powerpc/kernel/tau_6xx.c
@@ -104,12 +104,9 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(TAUException)
{
int cpu = smp_processor_id();
- irq_enter();
tau[cpu].interrupts++;
TAUupdate(cpu);
-
- irq_exit();
}
#endif /* CONFIG_TAU_INT */
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 435a251247ed..2177defb7884 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -610,7 +610,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt)
#endif
old_regs = set_irq_regs(regs);
- irq_enter();
+
trace_timer_interrupt_entry(regs);
if (test_irq_work_pending()) {
@@ -635,7 +635,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt)
}
trace_timer_interrupt_exit(regs);
- irq_exit();
+
set_irq_regs(old_regs);
}
EXPORT_SYMBOL(timer_interrupt);
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index d3892f402b0e..f37583d57442 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -801,7 +801,9 @@ void die_mce(const char *str, struct pt_regs *regs, long err)
* do_exit() checks for in_interrupt() and panics in that case, so
* exit the irq/nmi before calling die.
*/
- if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64))
+ if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
+ irq_exit();
+ else
nmi_exit();
die(str, regs, err);
}
@@ -1061,7 +1063,6 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(handle_hmi_exception)
struct pt_regs *old_regs;
old_regs = set_irq_regs(regs);
- irq_enter();
#ifdef CONFIG_VSX
/* Real mode flagged P9 special emu is needed */
@@ -1081,7 +1082,6 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(handle_hmi_exception)
if (ppc_md.handle_hmi_exception)
ppc_md.handle_hmi_exception(regs);
- irq_exit();
set_irq_regs(old_regs);
}
@@ -1899,13 +1899,9 @@ DEFINE_INTERRUPT_HANDLER_NMI(performance_monitor_exception_nmi)
DEFINE_INTERRUPT_HANDLER_ASYNC(performance_monitor_exception_async)
{
- irq_enter();
-
__this_cpu_inc(irq_stat.pmu_irqs);
perf_irq(regs);
-
- irq_exit();
}
DEFINE_INTERRUPT_HANDLER_RAW(performance_monitor_exception)
--
2.23.0
* [PATCH v6 32/39] powerpc/64s: move context tracking exit to interrupt exit path
@ 2021-01-15 16:50 ` Nicholas Piggin
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
The interrupt handler wrapper functions are not the ideal place to
maintain context tracking because after they return, the low level exit
code must then determine if there are interrupts to replay, or if the
task should be preempted, etc. Those paths (e.g., schedule_user) include
their own exception_enter/exit pairs to fix this up, but that is a bit
hacky (see the schedule_user() comments).
Ideally context tracking will go to user mode only when there are no
more interrupts or context switches or other exit processing work to
handle.
64e cannot do this because it does not use the C interrupt exit code.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 34 +++++++++++++++++++++++++---
arch/powerpc/kernel/syscall_64.c | 9 ++++++++
2 files changed, 40 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 5a1395499508..1c966e47b36f 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -7,16 +7,30 @@
#include <asm/ftrace.h>
struct interrupt_state {
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_BOOK3E_64
enum ctx_state ctx_state;
#endif
};
static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_BOOK3E_64
state->ctx_state = exception_enter();
#endif
+
+#ifdef CONFIG_PPC_BOOK3S_64
+ if (user_mode(regs)) {
+ CT_WARN_ON(ct_state() != CONTEXT_USER);
+ user_exit_irqoff();
+ } else {
+ /*
+ * CT_WARN_ON comes here via program_check_exception,
+ * so avoid recursion.
+ */
+ if (TRAP(regs) != 0x700)
+ CT_WARN_ON(ct_state() != CONTEXT_KERNEL);
+ }
+#endif
}
/*
@@ -35,9 +49,23 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
*/
static inline void interrupt_exit_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_BOOK3E_64
exception_exit(state->ctx_state);
#endif
+
+ /*
+ * Book3S exits to user via interrupt_exit_user_prepare(), which does
+ * context tracking. That is a cleaner way to handle PREEMPT=y and to
+ * avoid context entry/exit in e.g., preempt_schedule_irq(), which is
+ * likely to be where the core code wants to end up.
+ *
+ * The above comment explains why we can't do the
+ *
+ * if (user_mode(regs))
+ * user_exit_irqoff();
+ *
+ * sequence here.
+ */
}
static inline void interrupt_async_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index d7d256a7a41f..42f0ad4b2fbb 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -305,6 +305,7 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
BUG_ON(!(regs->msr & MSR_PR));
BUG_ON(!FULL_REGS(regs));
BUG_ON(regs->softe != IRQS_ENABLED);
+ CT_WARN_ON(ct_state() == CONTEXT_USER);
/*
* We don't need to restore AMR on the way back to userspace for KUAP.
@@ -347,7 +348,9 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
}
}
+ user_enter_irqoff();
if (unlikely(!prep_irq_for_enabled_exit(true))) {
+ user_exit_irqoff();
local_irq_enable();
local_irq_disable();
goto again;
@@ -392,6 +395,12 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
unrecoverable_exception(regs);
BUG_ON(regs->msr & MSR_PR);
BUG_ON(!FULL_REGS(regs));
+ /*
+ * CT_WARN_ON comes here via program_check_exception,
+ * so avoid recursion.
+ */
+ if (TRAP(regs) != 0x700)
+ CT_WARN_ON(ct_state() == CONTEXT_USER);
amr = kuap_get_and_check_amr();
--
2.23.0
* [PATCH v6 33/39] powerpc/64s: reconcile interrupts in C
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (31 preceding siblings ...)
2021-01-15 16:50 ` [PATCH v6 32/39] powerpc/64s: move context tracking exit to interrupt exit path Nicholas Piggin
@ 2021-01-15 16:50 ` Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 34/39] powerpc/64: move account_stolen_time into its own function Nicholas Piggin
` (5 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
There is no need for this to be in asm; use the new interrupt entry wrapper.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 15 +++++++++++----
arch/powerpc/kernel/exceptions-64s.S | 26 --------------------------
2 files changed, 11 insertions(+), 30 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 1c966e47b36f..e96d215f518a 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -14,11 +14,14 @@ struct interrupt_state {
static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
-#ifdef CONFIG_PPC_BOOK3E_64
- state->ctx_state = exception_enter();
-#endif
-
+ /*
+ * Book3E reconciles irq soft mask in asm
+ */
#ifdef CONFIG_PPC_BOOK3S_64
+ if (irq_soft_mask_set_return(IRQS_ALL_DISABLED) == IRQS_ENABLED)
+ trace_hardirqs_off();
+ local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+
if (user_mode(regs)) {
CT_WARN_ON(ct_state() != CONTEXT_USER);
user_exit_irqoff();
@@ -31,6 +34,10 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
CT_WARN_ON(ct_state() != CONTEXT_KERNEL);
}
#endif
+
+#ifdef CONFIG_PPC_BOOK3E_64
+ state->ctx_state = exception_enter();
+#endif
}
/*
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index fe33197ea8fb..39630b3f78b0 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -139,7 +139,6 @@ name:
#define IKVM_VIRT .L_IKVM_VIRT_\name\() /* Virt entry tests KVM */
#define ISTACK .L_ISTACK_\name\() /* Set regular kernel stack */
#define __ISTACK(name) .L_ISTACK_ ## name
-#define IRECONCILE .L_IRECONCILE_\name\() /* Do RECONCILE_IRQ_STATE */
#define IKUAP .L_IKUAP_\name\() /* Do KUAP lock */
#define INT_DEFINE_BEGIN(n) \
@@ -203,9 +202,6 @@ do_define_int n
.ifndef ISTACK
ISTACK=1
.endif
- .ifndef IRECONCILE
- IRECONCILE=1
- .endif
.ifndef IKUAP
IKUAP=1
.endif
@@ -653,10 +649,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
.if ISTACK
ACCOUNT_STOLEN_TIME
.endif
-
- .if IRECONCILE
- RECONCILE_IRQ_STATE(r10, r11)
- .endif
.endm
/*
@@ -935,7 +927,6 @@ INT_DEFINE_BEGIN(system_reset)
*/
ISET_RI=0
ISTACK=0
- IRECONCILE=0
IKVM_REAL=1
INT_DEFINE_END(system_reset)
@@ -1123,7 +1114,6 @@ INT_DEFINE_BEGIN(machine_check_early)
ISTACK=0
IDAR=1
IDSISR=1
- IRECONCILE=0
IKUAP=0 /* We don't touch AMR here, we never go to virtual mode */
INT_DEFINE_END(machine_check_early)
@@ -1483,7 +1473,6 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
INT_DEFINE_BEGIN(data_access_slb)
IVEC=0x380
IAREA=PACA_EXSLB
- IRECONCILE=0
IDAR=1
IKVM_SKIP=1
IKVM_REAL=1
@@ -1510,7 +1499,6 @@ MMU_FTR_SECTION_ELSE
li r3,-EFAULT
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
std r3,RESULT(r1)
- RECONCILE_IRQ_STATE(r10, r11)
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_bad_slb_fault
b interrupt_return
@@ -1568,7 +1556,6 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
INT_DEFINE_BEGIN(instruction_access_slb)
IVEC=0x480
IAREA=PACA_EXSLB
- IRECONCILE=0
IISIDE=1
IDAR=1
#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
@@ -1597,7 +1584,6 @@ MMU_FTR_SECTION_ELSE
li r3,-EFAULT
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
std r3,RESULT(r1)
- RECONCILE_IRQ_STATE(r10, r11)
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_bad_slb_fault
b interrupt_return
@@ -1757,7 +1743,6 @@ EXC_COMMON_BEGIN(program_check_common)
*/
INT_DEFINE_BEGIN(fp_unavailable)
IVEC=0x800
- IRECONCILE=0
#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
IKVM_REAL=1
#endif
@@ -1772,7 +1757,6 @@ EXC_VIRT_END(fp_unavailable, 0x4800, 0x100)
EXC_COMMON_BEGIN(fp_unavailable_common)
GEN_COMMON fp_unavailable
bne 1f /* if from user, just load it up */
- RECONCILE_IRQ_STATE(r10, r11)
addi r3,r1,STACK_FRAME_OVERHEAD
bl kernel_fp_unavailable_exception
0: trap
@@ -1791,7 +1775,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
b fast_interrupt_return
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
2: /* User process was in a transaction */
- RECONCILE_IRQ_STATE(r10, r11)
addi r3,r1,STACK_FRAME_OVERHEAD
bl fp_unavailable_tm
b interrupt_return
@@ -1856,7 +1839,6 @@ INT_DEFINE_BEGIN(hdecrementer)
IVEC=0x980
IHSRR=1
ISTACK=0
- IRECONCILE=0
IKVM_REAL=1
IKVM_VIRT=1
INT_DEFINE_END(hdecrementer)
@@ -2230,7 +2212,6 @@ INT_DEFINE_BEGIN(hmi_exception_early)
IHSRR=1
IREALMODE_COMMON=1
ISTACK=0
- IRECONCILE=0
IKUAP=0 /* We don't touch AMR here, we never go to virtual mode */
IKVM_REAL=1
INT_DEFINE_END(hmi_exception_early)
@@ -2404,7 +2385,6 @@ EXC_COMMON_BEGIN(performance_monitor_common)
*/
INT_DEFINE_BEGIN(altivec_unavailable)
IVEC=0xf20
- IRECONCILE=0
#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
IKVM_REAL=1
#endif
@@ -2434,7 +2414,6 @@ BEGIN_FTR_SECTION
b fast_interrupt_return
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
2: /* User process was in a transaction */
- RECONCILE_IRQ_STATE(r10, r11)
addi r3,r1,STACK_FRAME_OVERHEAD
bl altivec_unavailable_tm
b interrupt_return
@@ -2442,7 +2421,6 @@ BEGIN_FTR_SECTION
1:
END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
#endif
- RECONCILE_IRQ_STATE(r10, r11)
addi r3,r1,STACK_FRAME_OVERHEAD
bl altivec_unavailable_exception
b interrupt_return
@@ -2458,7 +2436,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
*/
INT_DEFINE_BEGIN(vsx_unavailable)
IVEC=0xf40
- IRECONCILE=0
#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
IKVM_REAL=1
#endif
@@ -2487,7 +2464,6 @@ BEGIN_FTR_SECTION
b load_up_vsx
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
2: /* User process was in a transaction */
- RECONCILE_IRQ_STATE(r10, r11)
addi r3,r1,STACK_FRAME_OVERHEAD
bl vsx_unavailable_tm
b interrupt_return
@@ -2495,7 +2471,6 @@ BEGIN_FTR_SECTION
1:
END_FTR_SECTION_IFSET(CPU_FTR_VSX)
#endif
- RECONCILE_IRQ_STATE(r10, r11)
addi r3,r1,STACK_FRAME_OVERHEAD
bl vsx_unavailable_exception
b interrupt_return
@@ -2830,7 +2805,6 @@ EXC_VIRT_NONE(0x5800, 0x100)
INT_DEFINE_BEGIN(soft_nmi)
IVEC=0x900
ISTACK=0
- IRECONCILE=0 /* Soft-NMI may fire under local_irq_disable */
INT_DEFINE_END(soft_nmi)
/*
--
2.23.0
* [PATCH v6 34/39] powerpc/64: move account_stolen_time into its own function
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (32 preceding siblings ...)
2021-01-15 16:50 ` [PATCH v6 33/39] powerpc/64s: reconcile interrupts in C Nicholas Piggin
@ 2021-01-15 16:50 ` Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 35/39] powerpc/64: entry cpu time accounting in C Nicholas Piggin
` (4 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This will be used by interrupt entry as well.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/cputime.h | 14 ++++++++++++++
arch/powerpc/kernel/syscall_64.c | 10 +---------
2 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/include/asm/cputime.h b/arch/powerpc/include/asm/cputime.h
index ed75d1c318e3..504f7fe6711a 100644
--- a/arch/powerpc/include/asm/cputime.h
+++ b/arch/powerpc/include/asm/cputime.h
@@ -87,6 +87,17 @@ static notrace inline void account_cpu_user_exit(void)
acct->starttime_user = tb;
}
+static notrace inline void account_stolen_time(void)
+{
+#ifdef CONFIG_PPC_SPLPAR
+ if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
+ struct lppaca *lp = local_paca->lppaca_ptr;
+
+ if (unlikely(local_paca->dtl_ridx != be64_to_cpu(lp->dtl_idx)))
+ accumulate_stolen_time();
+ }
+#endif
+}
#endif /* __KERNEL__ */
#else /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
@@ -96,5 +107,8 @@ static inline void account_cpu_user_entry(void)
static inline void account_cpu_user_exit(void)
{
}
+static notrace inline void account_stolen_time(void)
+{
+}
#endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
#endif /* __POWERPC_CPUTIME_H */
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index 42f0ad4b2fbb..32f72965da26 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -69,15 +69,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
account_cpu_user_entry();
-#ifdef CONFIG_PPC_SPLPAR
- if (IS_ENABLED(CONFIG_VIRT_CPU_ACCOUNTING_NATIVE) &&
- firmware_has_feature(FW_FEATURE_SPLPAR)) {
- struct lppaca *lp = local_paca->lppaca_ptr;
-
- if (unlikely(local_paca->dtl_ridx != be64_to_cpu(lp->dtl_idx)))
- accumulate_stolen_time();
- }
-#endif
+ account_stolen_time();
/*
* This is not required for the syscall exit path, but makes the
--
2.23.0
* [PATCH v6 35/39] powerpc/64: entry cpu time accounting in C
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (33 preceding siblings ...)
2021-01-15 16:50 ` [PATCH v6 34/39] powerpc/64: move account_stolen_time into its own function Nicholas Piggin
@ 2021-01-15 16:50 ` Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 36/39] powerpc: move NMI entry/exit code into wrapper Nicholas Piggin
` (3 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
There is no need for this to be in asm; use the new interrupt entry wrapper.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 6 ++++++
arch/powerpc/include/asm/ppc_asm.h | 24 ------------------------
arch/powerpc/kernel/exceptions-64e.S | 1 -
arch/powerpc/kernel/exceptions-64s.S | 5 -----
4 files changed, 6 insertions(+), 30 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index e96d215f518a..ca8e08b18a16 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -4,6 +4,7 @@
#include <linux/context_tracking.h>
#include <linux/hardirq.h>
+#include <asm/cputime.h>
#include <asm/ftrace.h>
struct interrupt_state {
@@ -25,6 +26,9 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
if (user_mode(regs)) {
CT_WARN_ON(ct_state() != CONTEXT_USER);
user_exit_irqoff();
+
+ account_cpu_user_entry();
+ account_stolen_time();
} else {
/*
* CT_WARN_ON comes here via program_check_exception,
@@ -37,6 +41,8 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
#ifdef CONFIG_PPC_BOOK3E_64
state->ctx_state = exception_enter();
+ if (user_mode(regs))
+ account_cpu_user_entry();
#endif
}
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index cc1bca571332..3dceb64fc9af 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -25,7 +25,6 @@
#ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
#define ACCOUNT_CPU_USER_ENTRY(ptr, ra, rb)
#define ACCOUNT_CPU_USER_EXIT(ptr, ra, rb)
-#define ACCOUNT_STOLEN_TIME
#else
#define ACCOUNT_CPU_USER_ENTRY(ptr, ra, rb) \
MFTB(ra); /* get timebase */ \
@@ -44,29 +43,6 @@
PPC_LL ra, ACCOUNT_SYSTEM_TIME(ptr); \
add ra,ra,rb; /* add on to system time */ \
PPC_STL ra, ACCOUNT_SYSTEM_TIME(ptr)
-
-#ifdef CONFIG_PPC_SPLPAR
-#define ACCOUNT_STOLEN_TIME \
-BEGIN_FW_FTR_SECTION; \
- beq 33f; \
- /* from user - see if there are any DTL entries to process */ \
- ld r10,PACALPPACAPTR(r13); /* get ptr to VPA */ \
- ld r11,PACA_DTL_RIDX(r13); /* get log read index */ \
- addi r10,r10,LPPACA_DTLIDX; \
- LDX_BE r10,0,r10; /* get log write index */ \
- cmpd cr1,r11,r10; \
- beq+ cr1,33f; \
- bl accumulate_stolen_time; \
- ld r12,_MSR(r1); \
- andi. r10,r12,MSR_PR; /* Restore cr0 (coming from user) */ \
-33: \
-END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
-
-#else /* CONFIG_PPC_SPLPAR */
-#define ACCOUNT_STOLEN_TIME
-
-#endif /* CONFIG_PPC_SPLPAR */
-
#endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
/*
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 003999c7836c..e8eb9992a270 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -398,7 +398,6 @@ exc_##n##_common: \
std r10,_NIP(r1); /* save SRR0 to stackframe */ \
std r11,_MSR(r1); /* save SRR1 to stackframe */ \
beq 2f; /* if from kernel mode */ \
- ACCOUNT_CPU_USER_ENTRY(r13,r10,r11);/* accounting (uses cr0+eq) */ \
2: ld r3,excf+EX_R10(r13); /* get back r10 */ \
ld r4,excf+EX_R11(r13); /* get back r11 */ \
mfspr r5,scratch; /* get back r13 */ \
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 39630b3f78b0..94b89ea123f3 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -577,7 +577,6 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real)
kuap_save_amr_and_lock r9, r10, cr1, cr0
.endif
beq 101f /* if from kernel mode */
- ACCOUNT_CPU_USER_ENTRY(r13, r9, r10)
BEGIN_FTR_SECTION
ld r9,IAREA+EX_PPR(r13) /* Read PPR from paca */
std r9,_PPR(r1)
@@ -645,10 +644,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
ld r11,exception_marker@toc(r2)
std r10,RESULT(r1) /* clear regs->result */
std r11,STACK_FRAME_OVERHEAD-16(r1) /* mark the frame */
-
- .if ISTACK
- ACCOUNT_STOLEN_TIME
- .endif
.endm
/*
--
2.23.0
* [PATCH v6 36/39] powerpc: move NMI entry/exit code into wrapper
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (34 preceding siblings ...)
2021-01-15 16:50 ` [PATCH v6 35/39] powerpc/64: entry cpu time accounting in C Nicholas Piggin
@ 2021-01-15 16:50 ` Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 37/39] powerpc/64s: move NMI soft-mask handling to C Nicholas Piggin
` (2 subsequent siblings)
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
This moves the common NMI entry and exit code into the interrupt handler
wrappers.
This changes the behaviour of soft-NMI (watchdog) and HMI interrupts, and
also MCE interrupts on 64e, by adding missing parts of the NMI entry to
them.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 24 +++++++++++++++++++
arch/powerpc/kernel/mce.c | 11 ---------
arch/powerpc/kernel/traps.c | 35 +++++-----------------------
arch/powerpc/kernel/watchdog.c | 10 ++++----
4 files changed, 34 insertions(+), 46 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index ca8e08b18a16..879a0b2705d6 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -94,14 +94,38 @@ static inline void interrupt_async_exit_prepare(struct pt_regs *regs, struct int
}
struct interrupt_nmi_state {
+#ifdef CONFIG_PPC64
+ u8 ftrace_enabled;
+#endif
};
static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
{
+#ifdef CONFIG_PPC64
+ state->ftrace_enabled = this_cpu_get_ftrace_enabled();
+ this_cpu_set_ftrace_enabled(0);
+#endif
+
+ /*
+ * Do not use nmi_enter() for pseries hash guest taking a real-mode
+ * NMI because not everything it touches is within the RMA limit.
+ */
+ if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64) ||
+ !firmware_has_feature(FW_FEATURE_LPAR) ||
+ radix_enabled() || (mfmsr() & MSR_DR))
+ nmi_enter();
}
static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
{
+ if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64) ||
+ !firmware_has_feature(FW_FEATURE_LPAR) ||
+ radix_enabled() || (mfmsr() & MSR_DR))
+ nmi_exit();
+
+#ifdef CONFIG_PPC64
+ this_cpu_set_ftrace_enabled(state->ftrace_enabled);
+#endif
}
/**
diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
index 54269947113d..51456217ec40 100644
--- a/arch/powerpc/kernel/mce.c
+++ b/arch/powerpc/kernel/mce.c
@@ -592,12 +592,6 @@ EXPORT_SYMBOL_GPL(machine_check_print_event_info);
DEFINE_INTERRUPT_HANDLER_NMI(machine_check_early)
{
long handled = 0;
- u8 ftrace_enabled = this_cpu_get_ftrace_enabled();
-
- this_cpu_set_ftrace_enabled(0);
- /* Do not use nmi_enter/exit for pseries hpte guest */
- if (radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR))
- nmi_enter();
hv_nmi_check_nonrecoverable(regs);
@@ -607,11 +601,6 @@ DEFINE_INTERRUPT_HANDLER_NMI(machine_check_early)
if (ppc_md.machine_check_early)
handled = ppc_md.machine_check_early(regs);
- if (radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR))
- nmi_exit();
-
- this_cpu_set_ftrace_enabled(ftrace_enabled);
-
return handled;
}
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index f37583d57442..9e5574756689 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -435,11 +435,6 @@ DEFINE_INTERRUPT_HANDLER_NMI(system_reset_exception)
{
unsigned long hsrr0, hsrr1;
bool saved_hsrrs = false;
- u8 ftrace_enabled = this_cpu_get_ftrace_enabled();
-
- this_cpu_set_ftrace_enabled(0);
-
- nmi_enter();
/*
* System reset can interrupt code where HSRRs are live and MSR[RI]=1.
@@ -514,10 +509,6 @@ DEFINE_INTERRUPT_HANDLER_NMI(system_reset_exception)
mtspr(SPRN_HSRR1, hsrr1);
}
- nmi_exit();
-
- this_cpu_set_ftrace_enabled(ftrace_enabled);
-
/* What should we do here? We could issue a shutdown or hard reset. */
return 0;
@@ -809,6 +800,12 @@ void die_mce(const char *str, struct pt_regs *regs, long err)
}
NOKPROBE_SYMBOL(die_mce);
+/*
+ * BOOK3S_64 does not call this handler as a non-maskable interrupt
+ * (it uses its own early real-mode handler to handle the MCE proper
+ * and then raises irq_work to call this handler when interrupts are
+ * enabled).
+ */
#ifdef CONFIG_PPC_BOOK3S_64
DEFINE_INTERRUPT_HANDLER_ASYNC(machine_check_exception)
#else
@@ -817,20 +814,6 @@ DEFINE_INTERRUPT_HANDLER_NMI(machine_check_exception)
{
int recover = 0;
- /*
- * BOOK3S_64 does not call this handler as a non-maskable interrupt
- * (it uses its own early real-mode handler to handle the MCE proper
- * and then raises irq_work to call this handler when interrupts are
- * enabled).
- *
- * This is silly. The BOOK3S_64 should just call a different function
- * rather than expecting semantics to magically change. Something
- * like 'non_nmi_machine_check_exception()', perhaps?
- */
- const bool nmi = !IS_ENABLED(CONFIG_PPC_BOOK3S_64);
-
- if (nmi) nmi_enter();
-
__this_cpu_inc(irq_stat.mce_exceptions);
add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE);
@@ -862,8 +845,6 @@ DEFINE_INTERRUPT_HANDLER_NMI(machine_check_exception)
if (!(regs->msr & MSR_RI))
die_mce("Unrecoverable Machine check", regs, SIGBUS);
- if (nmi) nmi_exit();
-
#ifdef CONFIG_PPC_BOOK3S_64
return;
#else
@@ -1885,14 +1866,10 @@ DEFINE_INTERRUPT_HANDLER(vsx_unavailable_tm)
#ifdef CONFIG_PPC64
DEFINE_INTERRUPT_HANDLER_NMI(performance_monitor_exception_nmi)
{
- nmi_enter();
-
__this_cpu_inc(irq_stat.pmu_irqs);
perf_irq(regs);
- nmi_exit();
-
return 0;
}
#endif
diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
index 824b9376ac35..dc39534836a3 100644
--- a/arch/powerpc/kernel/watchdog.c
+++ b/arch/powerpc/kernel/watchdog.c
@@ -254,11 +254,12 @@ DEFINE_INTERRUPT_HANDLER_NMI(soft_nmi_interrupt)
int cpu = raw_smp_processor_id();
u64 tb;
+ /* should only arrive from kernel, with irqs disabled */
+ WARN_ON_ONCE(!arch_irq_disabled_regs(regs));
+
if (!cpumask_test_cpu(cpu, &wd_cpus_enabled))
return 0;
- nmi_enter();
-
__this_cpu_inc(irq_stat.soft_nmi_irqs);
tb = get_tb();
@@ -266,7 +267,7 @@ DEFINE_INTERRUPT_HANDLER_NMI(soft_nmi_interrupt)
wd_smp_lock(&flags);
if (cpumask_test_cpu(cpu, &wd_smp_cpus_stuck)) {
wd_smp_unlock(&flags);
- goto out;
+ return 0;
}
set_cpu_stuck(cpu, tb);
@@ -290,9 +291,6 @@ DEFINE_INTERRUPT_HANDLER_NMI(soft_nmi_interrupt)
if (wd_panic_timeout_tb < 0x7fffffff)
mtspr(SPRN_DEC, wd_panic_timeout_tb);
-out:
- nmi_exit();
-
return 0;
}
--
2.23.0
* [PATCH v6 37/39] powerpc/64s: move NMI soft-mask handling to C
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (35 preceding siblings ...)
2021-01-15 16:50 ` [PATCH v6 36/39] powerpc: move NMI entry/exit code into wrapper Nicholas Piggin
@ 2021-01-15 16:50 ` Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 38/39] powerpc/64s: runlatch interrupt handling in C Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 39/39] powerpc/64s: power4 nap fixup " Nicholas Piggin
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
Saving and restoring soft-mask state can now be done in C using the
interrupt handler wrapper functions.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 26 ++++++++++++
arch/powerpc/kernel/exceptions-64s.S | 60 ----------------------------
2 files changed, 26 insertions(+), 60 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 879a0b2705d6..5f4e304a98d9 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -95,6 +95,10 @@ static inline void interrupt_async_exit_prepare(struct pt_regs *regs, struct int
struct interrupt_nmi_state {
#ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_BOOK3S_64
+ u8 irq_soft_mask;
+ u8 irq_happened;
+#endif
u8 ftrace_enabled;
#endif
};
@@ -102,6 +106,21 @@ struct interrupt_nmi_state {
static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
{
#ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_BOOK3S_64
+ state->irq_soft_mask = local_paca->irq_soft_mask;
+ state->irq_happened = local_paca->irq_happened;
+
+ /*
+ * Set IRQS_ALL_DISABLED unconditionally so irqs_disabled() does
+ * the right thing, and set IRQ_HARD_DIS. We do not want to reconcile
+ * because that goes through irq tracing which we don't want in NMI.
+ */
+ local_paca->irq_soft_mask = IRQS_ALL_DISABLED;
+ local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+
+ /* Don't do any per-CPU operations until interrupt state is fixed */
+#endif
state->ftrace_enabled = this_cpu_get_ftrace_enabled();
this_cpu_set_ftrace_enabled(0);
#endif
@@ -125,6 +144,13 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
#ifdef CONFIG_PPC64
this_cpu_set_ftrace_enabled(state->ftrace_enabled);
+
+#ifdef CONFIG_PPC_BOOK3S_64
+ /* Check we didn't change the pending interrupt mask. */
+ WARN_ON_ONCE((state->irq_happened | PACA_IRQ_HARD_DIS) != local_paca->irq_happened);
+ local_paca->irq_happened = state->irq_happened;
+ local_paca->irq_soft_mask = state->irq_soft_mask;
+#endif
#endif
}
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 94b89ea123f3..2fca2bad6b02 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1008,20 +1008,6 @@ EXC_COMMON_BEGIN(system_reset_common)
ld r1,PACA_NMI_EMERG_SP(r13)
subi r1,r1,INT_FRAME_SIZE
__GEN_COMMON_BODY system_reset
- /*
- * Set IRQS_ALL_DISABLED unconditionally so irqs_disabled() does
- * the right thing. We do not want to reconcile because that goes
- * through irq tracing which we don't want in NMI.
- *
- * Save PACAIRQHAPPENED to RESULT (otherwise unused), and set HARD_DIS
- * as we are running with MSR[EE]=0.
- */
- li r10,IRQS_ALL_DISABLED
- stb r10,PACAIRQSOFTMASK(r13)
- lbz r10,PACAIRQHAPPENED(r13)
- std r10,RESULT(r1)
- ori r10,r10,PACA_IRQ_HARD_DIS
- stb r10,PACAIRQHAPPENED(r13)
addi r3,r1,STACK_FRAME_OVERHEAD
bl system_reset_exception
@@ -1037,14 +1023,6 @@ EXC_COMMON_BEGIN(system_reset_common)
subi r10,r10,1
sth r10,PACA_IN_NMI(r13)
- /*
- * Restore soft mask settings.
- */
- ld r10,RESULT(r1)
- stb r10,PACAIRQHAPPENED(r13)
- ld r10,SOFTE(r1)
- stb r10,PACAIRQSOFTMASK(r13)
-
kuap_kernel_restore r9, r10
EXCEPTION_RESTORE_REGS
RFI_TO_USER_OR_KERNEL
@@ -1190,30 +1168,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
li r10,MSR_RI
mtmsrd r10,1
- /*
- * Set IRQS_ALL_DISABLED and save PACAIRQHAPPENED (see
- * system_reset_common)
- */
- li r10,IRQS_ALL_DISABLED
- stb r10,PACAIRQSOFTMASK(r13)
- lbz r10,PACAIRQHAPPENED(r13)
- std r10,RESULT(r1)
- ori r10,r10,PACA_IRQ_HARD_DIS
- stb r10,PACAIRQHAPPENED(r13)
-
addi r3,r1,STACK_FRAME_OVERHEAD
bl machine_check_early
std r3,RESULT(r1) /* Save result */
ld r12,_MSR(r1)
- /*
- * Restore soft mask settings.
- */
- ld r10,RESULT(r1)
- stb r10,PACAIRQHAPPENED(r13)
- ld r10,SOFTE(r1)
- stb r10,PACAIRQSOFTMASK(r13)
-
#ifdef CONFIG_PPC_P7_NAP
/*
* Check if thread was in power saving mode. We come here when any
@@ -2818,17 +2777,6 @@ EXC_COMMON_BEGIN(soft_nmi_common)
subi r1,r1,INT_FRAME_SIZE
__GEN_COMMON_BODY soft_nmi
- /*
- * Set IRQS_ALL_DISABLED and save PACAIRQHAPPENED (see
- * system_reset_common)
- */
- li r10,IRQS_ALL_DISABLED
- stb r10,PACAIRQSOFTMASK(r13)
- lbz r10,PACAIRQHAPPENED(r13)
- std r10,RESULT(r1)
- ori r10,r10,PACA_IRQ_HARD_DIS
- stb r10,PACAIRQHAPPENED(r13)
-
addi r3,r1,STACK_FRAME_OVERHEAD
bl soft_nmi_interrupt
@@ -2836,14 +2784,6 @@ EXC_COMMON_BEGIN(soft_nmi_common)
li r9,0
mtmsrd r9,1
- /*
- * Restore soft mask settings.
- */
- ld r10,RESULT(r1)
- stb r10,PACAIRQHAPPENED(r13)
- ld r10,SOFTE(r1)
- stb r10,PACAIRQSOFTMASK(r13)
-
kuap_kernel_restore r9, r10
EXCEPTION_RESTORE_REGS hsrr=0
RFI_TO_KERNEL
--
2.23.0
* [PATCH v6 38/39] powerpc/64s: runlatch interrupt handling in C
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (36 preceding siblings ...)
2021-01-15 16:50 ` [PATCH v6 37/39] powerpc/64s: move NMI soft-mask handling to C Nicholas Piggin
@ 2021-01-15 16:50 ` Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 39/39] powerpc/64s: power4 nap fixup " Nicholas Piggin
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
There is no need for this to be in asm; use the new interrupt entry wrapper.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 7 +++++++
arch/powerpc/kernel/exceptions-64s.S | 18 ------------------
2 files changed, 7 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 5f4e304a98d9..5e3b290bf3ae 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -6,6 +6,7 @@
#include <linux/hardirq.h>
#include <asm/cputime.h>
#include <asm/ftrace.h>
+#include <asm/runlatch.h>
struct interrupt_state {
#ifdef CONFIG_PPC_BOOK3E_64
@@ -83,6 +84,12 @@ static inline void interrupt_exit_prepare(struct pt_regs *regs, struct interrupt
static inline void interrupt_async_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
+#ifdef CONFIG_PPC_BOOK3S_64
+ if (cpu_has_feature(CPU_FTR_CTRL) &&
+ !test_thread_local_flags(_TLF_RUNLATCH))
+ __ppc64_runlatch_on();
+#endif
+
interrupt_enter_prepare(regs, state);
irq_enter();
}
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 2fca2bad6b02..27351276c54b 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -692,14 +692,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
ld r1,GPR1(r1)
.endm
-#define RUNLATCH_ON \
-BEGIN_FTR_SECTION \
- ld r3, PACA_THREAD_INFO(r13); \
- ld r4,TI_LOCAL_FLAGS(r3); \
- andi. r0,r4,_TLF_RUNLATCH; \
- beql ppc64_runlatch_on_trampoline; \
-END_FTR_SECTION_IFSET(CPU_FTR_CTRL)
-
/*
* When the idle code in power4_idle puts the CPU into NAP mode,
* it has to do so in a loop, and relies on the external interrupt
@@ -1585,7 +1577,6 @@ EXC_VIRT_END(hardware_interrupt, 0x4500, 0x100)
EXC_COMMON_BEGIN(hardware_interrupt_common)
GEN_COMMON hardware_interrupt
FINISH_NAP
- RUNLATCH_ON
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_IRQ
b interrupt_return
@@ -1771,7 +1762,6 @@ EXC_VIRT_END(decrementer, 0x4900, 0x80)
EXC_COMMON_BEGIN(decrementer_common)
GEN_COMMON decrementer
FINISH_NAP
- RUNLATCH_ON
addi r3,r1,STACK_FRAME_OVERHEAD
bl timer_interrupt
b interrupt_return
@@ -1857,7 +1847,6 @@ EXC_VIRT_END(doorbell_super, 0x4a00, 0x100)
EXC_COMMON_BEGIN(doorbell_super_common)
GEN_COMMON doorbell_super
FINISH_NAP
- RUNLATCH_ON
addi r3,r1,STACK_FRAME_OVERHEAD
#ifdef CONFIG_PPC_DOORBELL
bl doorbell_exception
@@ -2212,7 +2201,6 @@ EXC_COMMON_BEGIN(hmi_exception_early_common)
EXC_COMMON_BEGIN(hmi_exception_common)
GEN_COMMON hmi_exception
FINISH_NAP
- RUNLATCH_ON
addi r3,r1,STACK_FRAME_OVERHEAD
bl handle_hmi_exception
b interrupt_return
@@ -2242,7 +2230,6 @@ EXC_VIRT_END(h_doorbell, 0x4e80, 0x20)
EXC_COMMON_BEGIN(h_doorbell_common)
GEN_COMMON h_doorbell
FINISH_NAP
- RUNLATCH_ON
addi r3,r1,STACK_FRAME_OVERHEAD
#ifdef CONFIG_PPC_DOORBELL
bl doorbell_exception
@@ -2276,7 +2263,6 @@ EXC_VIRT_END(h_virt_irq, 0x4ea0, 0x20)
EXC_COMMON_BEGIN(h_virt_irq_common)
GEN_COMMON h_virt_irq
FINISH_NAP
- RUNLATCH_ON
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_IRQ
b interrupt_return
@@ -2323,7 +2309,6 @@ EXC_VIRT_END(performance_monitor, 0x4f00, 0x20)
EXC_COMMON_BEGIN(performance_monitor_common)
GEN_COMMON performance_monitor
FINISH_NAP
- RUNLATCH_ON
addi r3,r1,STACK_FRAME_OVERHEAD
bl performance_monitor_exception
b interrupt_return
@@ -3057,9 +3042,6 @@ kvmppc_skip_Hinterrupt:
* come here.
*/
-EXC_COMMON_BEGIN(ppc64_runlatch_on_trampoline)
- b __ppc64_runlatch_on
-
USE_FIXED_SECTION(virt_trampolines)
/*
* All code below __end_interrupts is treated as soft-masked. If
--
2.23.0
^ permalink raw reply related [flat|nested] 56+ messages in thread
* [PATCH v6 39/39] powerpc/64s: power4 nap fixup in C
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
` (37 preceding siblings ...)
2021-01-15 16:50 ` [PATCH v6 38/39] powerpc/64s: runlatch interrupt handling in C Nicholas Piggin
@ 2021-01-15 16:50 ` Nicholas Piggin
38 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-15 16:50 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin
There is no need for this to be in asm; use the new interrupt entry wrapper.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/interrupt.h | 15 +++++++++
arch/powerpc/include/asm/processor.h | 1 +
arch/powerpc/include/asm/thread_info.h | 6 ++++
arch/powerpc/kernel/exceptions-64s.S | 45 --------------------------
arch/powerpc/kernel/idle_book3s.S | 4 +++
5 files changed, 26 insertions(+), 45 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 5e3b290bf3ae..c48ded0df2bc 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -8,6 +8,16 @@
#include <asm/ftrace.h>
#include <asm/runlatch.h>
+static inline void nap_adjust_return(struct pt_regs *regs)
+{
+#ifdef CONFIG_PPC_970_NAP
+ if (unlikely(test_thread_local_flags(_TLF_NAPPING))) {
+ clear_thread_local_flags(_TLF_NAPPING);
+ regs->nip = (unsigned long)power4_idle_nap_return;
+ }
+#endif
+}
+
struct interrupt_state {
#ifdef CONFIG_PPC_BOOK3E_64
enum ctx_state ctx_state;
@@ -98,6 +108,9 @@ static inline void interrupt_async_exit_prepare(struct pt_regs *regs, struct int
{
irq_exit();
interrupt_exit_prepare(regs, state);
+
+ /* Adjust at exit so the main handler sees the true NIA */
+ nap_adjust_return(regs);
}
struct interrupt_nmi_state {
@@ -149,6 +162,8 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
radix_enabled() || (mfmsr() & MSR_DR))
nmi_exit();
+ nap_adjust_return(regs);
+
#ifdef CONFIG_PPC64
this_cpu_set_ftrace_enabled(state->ftrace_enabled);
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 8acc3590c971..eedc3c775141 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -393,6 +393,7 @@ extern unsigned long isa300_idle_stop_mayloss(unsigned long psscr_val);
extern unsigned long isa206_idle_insn_mayloss(unsigned long type);
#ifdef CONFIG_PPC_970_NAP
extern void power4_idle_nap(void);
+void power4_idle_nap_return(void);
#endif
extern unsigned long cpuidle_disable;
diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index 386d576673a1..bf137151100b 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -152,6 +152,12 @@ void arch_setup_new_exec(void);
#ifndef __ASSEMBLY__
+static inline void clear_thread_local_flags(unsigned int flags)
+{
+ struct thread_info *ti = current_thread_info();
+ ti->local_flags &= ~flags;
+}
+
static inline bool test_thread_local_flags(unsigned int flags)
{
struct thread_info *ti = current_thread_info();
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 27351276c54b..5478ffa85603 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -692,25 +692,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
ld r1,GPR1(r1)
.endm
-/*
- * When the idle code in power4_idle puts the CPU into NAP mode,
- * it has to do so in a loop, and relies on the external interrupt
- * and decrementer interrupt entry code to get it out of the loop.
- * It sets the _TLF_NAPPING bit in current_thread_info()->local_flags
- * to signal that it is in the loop and needs help to get out.
- */
-#ifdef CONFIG_PPC_970_NAP
-#define FINISH_NAP \
-BEGIN_FTR_SECTION \
- ld r11, PACA_THREAD_INFO(r13); \
- ld r9,TI_LOCAL_FLAGS(r11); \
- andi. r10,r9,_TLF_NAPPING; \
- bnel power4_fixup_nap; \
-END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
-#else
-#define FINISH_NAP
-#endif
-
/*
* There are a few constraints to be concerned with.
* - Real mode exceptions code/data must be located at their physical location.
@@ -1248,7 +1229,6 @@ EXC_COMMON_BEGIN(machine_check_common)
*/
GEN_COMMON machine_check
- FINISH_NAP
/* Enable MSR_RI when finished with PACA_EXMC */
li r10,MSR_RI
mtmsrd r10,1
@@ -1576,7 +1556,6 @@ EXC_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x100)
EXC_VIRT_END(hardware_interrupt, 0x4500, 0x100)
EXC_COMMON_BEGIN(hardware_interrupt_common)
GEN_COMMON hardware_interrupt
- FINISH_NAP
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_IRQ
b interrupt_return
@@ -1761,7 +1740,6 @@ EXC_VIRT_BEGIN(decrementer, 0x4900, 0x80)
EXC_VIRT_END(decrementer, 0x4900, 0x80)
EXC_COMMON_BEGIN(decrementer_common)
GEN_COMMON decrementer
- FINISH_NAP
addi r3,r1,STACK_FRAME_OVERHEAD
bl timer_interrupt
b interrupt_return
@@ -1846,7 +1824,6 @@ EXC_VIRT_BEGIN(doorbell_super, 0x4a00, 0x100)
EXC_VIRT_END(doorbell_super, 0x4a00, 0x100)
EXC_COMMON_BEGIN(doorbell_super_common)
GEN_COMMON doorbell_super
- FINISH_NAP
addi r3,r1,STACK_FRAME_OVERHEAD
#ifdef CONFIG_PPC_DOORBELL
bl doorbell_exception
@@ -2200,7 +2177,6 @@ EXC_COMMON_BEGIN(hmi_exception_early_common)
EXC_COMMON_BEGIN(hmi_exception_common)
GEN_COMMON hmi_exception
- FINISH_NAP
addi r3,r1,STACK_FRAME_OVERHEAD
bl handle_hmi_exception
b interrupt_return
@@ -2229,7 +2205,6 @@ EXC_VIRT_BEGIN(h_doorbell, 0x4e80, 0x20)
EXC_VIRT_END(h_doorbell, 0x4e80, 0x20)
EXC_COMMON_BEGIN(h_doorbell_common)
GEN_COMMON h_doorbell
- FINISH_NAP
addi r3,r1,STACK_FRAME_OVERHEAD
#ifdef CONFIG_PPC_DOORBELL
bl doorbell_exception
@@ -2262,7 +2237,6 @@ EXC_VIRT_BEGIN(h_virt_irq, 0x4ea0, 0x20)
EXC_VIRT_END(h_virt_irq, 0x4ea0, 0x20)
EXC_COMMON_BEGIN(h_virt_irq_common)
GEN_COMMON h_virt_irq
- FINISH_NAP
addi r3,r1,STACK_FRAME_OVERHEAD
bl do_IRQ
b interrupt_return
@@ -2308,7 +2282,6 @@ EXC_VIRT_BEGIN(performance_monitor, 0x4f00, 0x20)
EXC_VIRT_END(performance_monitor, 0x4f00, 0x20)
EXC_COMMON_BEGIN(performance_monitor_common)
GEN_COMMON performance_monitor
- FINISH_NAP
addi r3,r1,STACK_FRAME_OVERHEAD
bl performance_monitor_exception
b interrupt_return
@@ -3059,24 +3032,6 @@ USE_FIXED_SECTION(virt_trampolines)
__end_interrupts:
DEFINE_FIXED_SYMBOL(__end_interrupts)
-#ifdef CONFIG_PPC_970_NAP
- /*
- * Called by exception entry code if _TLF_NAPPING was set, this clears
- * the NAPPING flag, and redirects the exception exit to
- * power4_fixup_nap_return.
- */
- .globl power4_fixup_nap
-EXC_COMMON_BEGIN(power4_fixup_nap)
- andc r9,r9,r10
- std r9,TI_LOCAL_FLAGS(r11)
- LOAD_REG_ADDR(r10, power4_idle_nap_return)
- std r10,_NIP(r1)
- blr
-
-power4_idle_nap_return:
- blr
-#endif
-
CLOSE_FIXED_SECTION(real_vectors);
CLOSE_FIXED_SECTION(real_trampolines);
CLOSE_FIXED_SECTION(virt_vectors);
diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
index 22f249b6f58d..27d2e6a72ec9 100644
--- a/arch/powerpc/kernel/idle_book3s.S
+++ b/arch/powerpc/kernel/idle_book3s.S
@@ -201,4 +201,8 @@ _GLOBAL(power4_idle_nap)
mtmsrd r7
isync
b 1b
+
+ .globl power4_idle_nap_return
+power4_idle_nap_return:
+ blr
#endif
--
2.23.0
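The nap-return redirection this patch moves into C can be sketched as a tiny userspace model (illustrative only: the flag value and the pt_regs/thread_info shapes below are reduced stand-ins, not the kernel's definitions):

```c
#include <assert.h>

/* Reduced stand-ins for the kernel types; the values are illustrative only. */
#define TLF_NAPPING 0x1u

struct pt_regs { unsigned long nip; };
struct thread_info { unsigned int local_flags; };

static struct thread_info current_ti;

/* Stands in for the asm stub the real patch keeps in idle_book3s.S. */
static void power4_idle_nap_return(void) { }

/*
 * Mirrors nap_adjust_return(): if the interrupt hit while the CPU was in
 * the power4 nap loop, clear the napping flag and redirect the interrupt
 * return address to the nap-exit stub.
 */
static void nap_adjust_return(struct pt_regs *regs)
{
	if (current_ti.local_flags & TLF_NAPPING) {
		current_ti.local_flags &= ~TLF_NAPPING;
		regs->nip = (unsigned long)power4_idle_nap_return;
	}
}
```

As in the patch, the adjustment runs in the exit wrapper, so the main handler still sees the original NIA.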
* Re: [PATCH v6 07/39] powerpc: bad_page_fault get registers from regs
2021-01-15 16:49 ` [PATCH v6 07/39] powerpc: bad_page_fault " Nicholas Piggin
@ 2021-01-15 17:09 ` Christophe Leroy
2021-01-16 0:42 ` Nicholas Piggin
0 siblings, 1 reply; 56+ messages in thread
From: Christophe Leroy @ 2021-01-15 17:09 UTC (permalink / raw)
To: Nicholas Piggin, linuxppc-dev
On 15/01/2021 at 17:49, Nicholas Piggin wrote:
> Similar to the previous patch this makes interrupt handler function
> types more regular so they can be wrapped with the next patch.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/include/asm/bug.h | 5 +++--
> arch/powerpc/kernel/entry_32.S | 3 +--
> arch/powerpc/kernel/exceptions-64e.S | 3 +--
> arch/powerpc/kernel/exceptions-64s.S | 4 +---
> arch/powerpc/kernel/traps.c | 2 +-
> arch/powerpc/mm/book3s64/hash_utils.c | 4 ++--
> arch/powerpc/mm/book3s64/slb.c | 2 +-
> arch/powerpc/mm/fault.c | 13 ++++++++++---
> arch/powerpc/platforms/8xx/machine_check.c | 2 +-
> 9 files changed, 21 insertions(+), 17 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
> index f7827e993196..8f09ddae9305 100644
> --- a/arch/powerpc/include/asm/bug.h
> +++ b/arch/powerpc/include/asm/bug.h
> @@ -112,8 +112,9 @@
>
> struct pt_regs;
> long do_page_fault(struct pt_regs *);
> -extern void bad_page_fault(struct pt_regs *, unsigned long, int);
> -void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig);
> +void bad_page_fault(struct pt_regs *, int);
> +void __bad_page_fault(struct pt_regs *regs, int sig);
> +void do_bad_page_fault_segv(struct pt_regs *regs);
What is that do_bad_page_fault_segv()? Shouldn't it be in a separate patch?
> extern void _exception(int, struct pt_regs *, int, unsigned long);
> extern void _exception_pkey(struct pt_regs *, unsigned long, int);
> extern void die(const char *, struct pt_regs *, long);
> diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
> index d6ea3f2d6cc0..b102b40c4988 100644
> --- a/arch/powerpc/kernel/entry_32.S
> +++ b/arch/powerpc/kernel/entry_32.S
> @@ -672,9 +672,8 @@ handle_page_fault:
> lwz r0,_TRAP(r1)
> clrrwi r0,r0,1
> stw r0,_TRAP(r1)
> - mr r5,r3
> + mr r4,r3 /* err arg for bad_page_fault */
> addi r3,r1,STACK_FRAME_OVERHEAD
> - lwz r4,_DAR(r1)
> bl __bad_page_fault
> b ret_from_except_full
>
> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
> index 43e71d86dcbf..52421042a020 100644
> --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -1018,9 +1018,8 @@ storage_fault_common:
> bne- 1f
> b ret_from_except_lite
> 1: bl save_nvgprs
> - mr r5,r3
> + mr r4,r3
> addi r3,r1,STACK_FRAME_OVERHEAD
> - ld r4,_DAR(r1)
> bl __bad_page_fault
> b ret_from_except
>
> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
> index 839dcb94eea7..b90d3cde14cf 100644
> --- a/arch/powerpc/kernel/exceptions-64s.S
> +++ b/arch/powerpc/kernel/exceptions-64s.S
> @@ -2151,9 +2151,7 @@ EXC_COMMON_BEGIN(h_data_storage_common)
> GEN_COMMON h_data_storage
> addi r3,r1,STACK_FRAME_OVERHEAD
> BEGIN_MMU_FTR_SECTION
> - ld r4,_DAR(r1)
> - li r5,SIGSEGV
> - bl bad_page_fault
> + bl do_bad_page_fault_segv
> MMU_FTR_SECTION_ELSE
> bl unknown_exception
> ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_TYPE_RADIX)
> diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
> index 3ec7b443fe6b..f3f6af3141ee 100644
> --- a/arch/powerpc/kernel/traps.c
> +++ b/arch/powerpc/kernel/traps.c
> @@ -1612,7 +1612,7 @@ void alignment_exception(struct pt_regs *regs)
> if (user_mode(regs))
> _exception(sig, regs, code, regs->dar);
> else
> - bad_page_fault(regs, regs->dar, sig);
> + bad_page_fault(regs, sig);
>
> bail:
> exception_exit(prev_state);
> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
> index 9a499af3eebf..1a270cc37d97 100644
> --- a/arch/powerpc/mm/book3s64/hash_utils.c
> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
> @@ -1539,7 +1539,7 @@ long do_hash_fault(struct pt_regs *regs)
> * the access, or panic if there isn't a handler.
> */
> if (unlikely(in_nmi())) {
> - bad_page_fault(regs, ea, SIGSEGV);
> + bad_page_fault(regs, SIGSEGV);
> return 0;
> }
>
> @@ -1578,7 +1578,7 @@ long do_hash_fault(struct pt_regs *regs)
> else
> _exception(SIGBUS, regs, BUS_ADRERR, ea);
> } else {
> - bad_page_fault(regs, ea, SIGBUS);
> + bad_page_fault(regs, SIGBUS);
> }
> err = 0;
>
> diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
> index 985902ce0272..c581548b533f 100644
> --- a/arch/powerpc/mm/book3s64/slb.c
> +++ b/arch/powerpc/mm/book3s64/slb.c
> @@ -874,7 +874,7 @@ void do_bad_slb_fault(struct pt_regs *regs)
> if (user_mode(regs))
> _exception(SIGSEGV, regs, SEGV_BNDERR, regs->dar);
> else
> - bad_page_fault(regs, regs->dar, SIGSEGV);
> + bad_page_fault(regs, SIGSEGV);
> } else if (err == -EINVAL) {
> unrecoverable_exception(regs);
> } else {
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 273ff845eccf..e476d7701413 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -566,7 +566,7 @@ NOKPROBE_SYMBOL(do_page_fault);
> * It is called from the DSI and ISI handlers in head.S and from some
> * of the procedures in traps.c.
> */
> -void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
> +void __bad_page_fault(struct pt_regs *regs, int sig)
> {
> int is_write = page_fault_is_write(regs->dsisr);
>
> @@ -604,7 +604,7 @@ void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
> die("Kernel access of bad area", regs, sig);
> }
>
> -void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
> +void bad_page_fault(struct pt_regs *regs, int sig)
> {
> const struct exception_table_entry *entry;
>
> @@ -613,5 +613,12 @@ void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
> if (entry)
> instruction_pointer_set(regs, extable_fixup(entry));
> else
> - __bad_page_fault(regs, address, sig);
> + __bad_page_fault(regs, sig);
> }
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +void do_bad_page_fault_segv(struct pt_regs *regs)
> +{
> + bad_page_fault(regs, SIGSEGV);
> +}
> +#endif
> diff --git a/arch/powerpc/platforms/8xx/machine_check.c b/arch/powerpc/platforms/8xx/machine_check.c
> index 88dedf38eccd..656365975895 100644
> --- a/arch/powerpc/platforms/8xx/machine_check.c
> +++ b/arch/powerpc/platforms/8xx/machine_check.c
> @@ -26,7 +26,7 @@ int machine_check_8xx(struct pt_regs *regs)
> * to deal with that than having a wart in the mcheck handler.
> * -- BenH
> */
> - bad_page_fault(regs, regs->dar, SIGBUS);
> + bad_page_fault(regs, SIGBUS);
> return 1;
> #else
> return 0;
>
* Re: [PATCH v6 17/39] powerpc/fsl_booke/32: CacheLockingException remove args
2021-01-15 16:49 ` [PATCH v6 17/39] powerpc/fsl_booke/32: CacheLockingException remove args Nicholas Piggin
@ 2021-01-15 17:14 ` Christophe Leroy
2021-01-16 0:43 ` Nicholas Piggin
0 siblings, 1 reply; 56+ messages in thread
From: Christophe Leroy @ 2021-01-15 17:14 UTC (permalink / raw)
To: Nicholas Piggin, linuxppc-dev
On 15/01/2021 at 17:49, Nicholas Piggin wrote:
> Like other interrupt handler conversions, switch to getting registers
> from the pt_regs argument.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/kernel/head_fsl_booke.S | 6 +++---
> arch/powerpc/kernel/traps.c | 5 +++--
> 2 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
> index fdd4d274c245..0d4d9a6fcca1 100644
> --- a/arch/powerpc/kernel/head_fsl_booke.S
> +++ b/arch/powerpc/kernel/head_fsl_booke.S
> @@ -364,12 +364,12 @@ interrupt_base:
> /* Data Storage Interrupt */
> START_EXCEPTION(DataStorage)
> NORMAL_EXCEPTION_PROLOG(DATA_STORAGE)
> - mfspr r5,SPRN_ESR /* Grab the ESR, save it, pass arg3 */
> + mfspr r5,SPRN_ESR /* Grab the ESR, save it */
> stw r5,_ESR(r11)
> - mfspr r4,SPRN_DEAR /* Grab the DEAR, save it, pass arg2 */
> + mfspr r4,SPRN_DEAR /* Grab the DEAR, save it */
> + stw r4, _DEAR(r11)
> andis. r10,r5,(ESR_ILK|ESR_DLK)@h
> bne 1f
> - stw r4, _DEAR(r11)
> EXC_XFER_LITE(0x0300, handle_page_fault)
> 1:
> addi r3,r1,STACK_FRAME_OVERHEAD
Why isn't the above done in patch 5?
> diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
> index 639bcafbad5e..1af52a4bce1f 100644
> --- a/arch/powerpc/kernel/traps.c
> +++ b/arch/powerpc/kernel/traps.c
> @@ -2105,9 +2105,10 @@ void altivec_assist_exception(struct pt_regs *regs)
> #endif /* CONFIG_ALTIVEC */
>
> #ifdef CONFIG_FSL_BOOKE
> -void CacheLockingException(struct pt_regs *regs, unsigned long address,
> - unsigned long error_code)
> +void CacheLockingException(struct pt_regs *regs)
> {
> + unsigned long error_code = regs->dsisr;
> +
> /* We treat cache locking instructions from the user
> * as priv ops, in the future we could try to do
> * something smarter
>
* Re: [PATCH v6 07/39] powerpc: bad_page_fault get registers from regs
2021-01-15 17:09 ` Christophe Leroy
@ 2021-01-16 0:42 ` Nicholas Piggin
0 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-16 0:42 UTC (permalink / raw)
To: Christophe Leroy, linuxppc-dev
Excerpts from Christophe Leroy's message of January 16, 2021 3:09 am:
>
>
> On 15/01/2021 at 17:49, Nicholas Piggin wrote:
>> Similar to the previous patch this makes interrupt handler function
>> types more regular so they can be wrapped with the next patch.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> arch/powerpc/include/asm/bug.h | 5 +++--
>> arch/powerpc/kernel/entry_32.S | 3 +--
>> arch/powerpc/kernel/exceptions-64e.S | 3 +--
>> arch/powerpc/kernel/exceptions-64s.S | 4 +---
>> arch/powerpc/kernel/traps.c | 2 +-
>> arch/powerpc/mm/book3s64/hash_utils.c | 4 ++--
>> arch/powerpc/mm/book3s64/slb.c | 2 +-
>> arch/powerpc/mm/fault.c | 13 ++++++++++---
>> arch/powerpc/platforms/8xx/machine_check.c | 2 +-
>> 9 files changed, 21 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
>> index f7827e993196..8f09ddae9305 100644
>> --- a/arch/powerpc/include/asm/bug.h
>> +++ b/arch/powerpc/include/asm/bug.h
>> @@ -112,8 +112,9 @@
>>
>> struct pt_regs;
>> long do_page_fault(struct pt_regs *);
>> -extern void bad_page_fault(struct pt_regs *, unsigned long, int);
>> -void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig);
>> +void bad_page_fault(struct pt_regs *, int);
>> +void __bad_page_fault(struct pt_regs *regs, int sig);
>> +void do_bad_page_fault_segv(struct pt_regs *regs);
>
> What is that do_bad_page_fault_segv()? Shouldn't it be in a separate patch?
Hmm, yeah probably. It will be an interrupt handler (that doesn't
require the SIGSEGV argument).
Thanks,
Nick
* Re: [PATCH v6 17/39] powerpc/fsl_booke/32: CacheLockingException remove args
2021-01-15 17:14 ` Christophe Leroy
@ 2021-01-16 0:43 ` Nicholas Piggin
2021-01-16 7:38 ` Christophe Leroy
0 siblings, 1 reply; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-16 0:43 UTC (permalink / raw)
To: Christophe Leroy, linuxppc-dev
Excerpts from Christophe Leroy's message of January 16, 2021 3:14 am:
>
>
> On 15/01/2021 at 17:49, Nicholas Piggin wrote:
>> Like other interrupt handler conversions, switch to getting registers
>> from the pt_regs argument.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> arch/powerpc/kernel/head_fsl_booke.S | 6 +++---
>> arch/powerpc/kernel/traps.c | 5 +++--
>> 2 files changed, 6 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
>> index fdd4d274c245..0d4d9a6fcca1 100644
>> --- a/arch/powerpc/kernel/head_fsl_booke.S
>> +++ b/arch/powerpc/kernel/head_fsl_booke.S
>> @@ -364,12 +364,12 @@ interrupt_base:
>> /* Data Storage Interrupt */
>> START_EXCEPTION(DataStorage)
>> NORMAL_EXCEPTION_PROLOG(DATA_STORAGE)
>> - mfspr r5,SPRN_ESR /* Grab the ESR, save it, pass arg3 */
>> + mfspr r5,SPRN_ESR /* Grab the ESR, save it */
>> stw r5,_ESR(r11)
>> - mfspr r4,SPRN_DEAR /* Grab the DEAR, save it, pass arg2 */
>> + mfspr r4,SPRN_DEAR /* Grab the DEAR, save it */
>> + stw r4, _DEAR(r11)
>> andis. r10,r5,(ESR_ILK|ESR_DLK)@h
>> bne 1f
>> - stw r4, _DEAR(r11)
>> EXC_XFER_LITE(0x0300, handle_page_fault)
>> 1:
>> addi r3,r1,STACK_FRAME_OVERHEAD
>
> Why isn't the above done in patch 5?
I don't think it's required there, is it?
Thanks,
Nick
* Re: [PATCH v6 17/39] powerpc/fsl_booke/32: CacheLockingException remove args
2021-01-16 0:43 ` Nicholas Piggin
@ 2021-01-16 7:38 ` Christophe Leroy
2021-01-16 10:34 ` Nicholas Piggin
0 siblings, 1 reply; 56+ messages in thread
From: Christophe Leroy @ 2021-01-16 7:38 UTC (permalink / raw)
To: Nicholas Piggin, linuxppc-dev
On 16/01/2021 at 01:43, Nicholas Piggin wrote:
> Excerpts from Christophe Leroy's message of January 16, 2021 3:14 am:
>>
>>
>> On 15/01/2021 at 17:49, Nicholas Piggin wrote:
>>> Like other interrupt handler conversions, switch to getting registers
>>> from the pt_regs argument.
>>>
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>> arch/powerpc/kernel/head_fsl_booke.S | 6 +++---
>>> arch/powerpc/kernel/traps.c | 5 +++--
>>> 2 files changed, 6 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
>>> index fdd4d274c245..0d4d9a6fcca1 100644
>>> --- a/arch/powerpc/kernel/head_fsl_booke.S
>>> +++ b/arch/powerpc/kernel/head_fsl_booke.S
>>> @@ -364,12 +364,12 @@ interrupt_base:
>>> /* Data Storage Interrupt */
>>> START_EXCEPTION(DataStorage)
>>> NORMAL_EXCEPTION_PROLOG(DATA_STORAGE)
>>> - mfspr r5,SPRN_ESR /* Grab the ESR, save it, pass arg3 */
>>> + mfspr r5,SPRN_ESR /* Grab the ESR, save it */
>>> stw r5,_ESR(r11)
>>> - mfspr r4,SPRN_DEAR /* Grab the DEAR, save it, pass arg2 */
>>> + mfspr r4,SPRN_DEAR /* Grab the DEAR, save it */
>>> + stw r4, _DEAR(r11)
>>> andis. r10,r5,(ESR_ILK|ESR_DLK)@h
>>> bne 1f
>>> - stw r4, _DEAR(r11)
>>> EXC_XFER_LITE(0x0300, handle_page_fault)
>>> 1:
>>> addi r3,r1,STACK_FRAME_OVERHEAD
>>
>> Why isn't the above done in patch 5?
>
> I don't think it's required there, is it?
Ah yes, moving the 'stw' is needed only here.
But the comment changes belong in patch 5; you made exactly similar changes there in
kernel/head_40x.S.
By the way, I think patch 17 could immediately follow patch 5 and patch 18 could follow patch 6.
>
> Thanks,
> Nick
>
* Re: [PATCH v6 17/39] powerpc/fsl_booke/32: CacheLockingException remove args
2021-01-16 7:38 ` Christophe Leroy
@ 2021-01-16 10:34 ` Nicholas Piggin
0 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-16 10:34 UTC (permalink / raw)
To: Christophe Leroy, linuxppc-dev
Excerpts from Christophe Leroy's message of January 16, 2021 5:38 pm:
>
>
> On 16/01/2021 at 01:43, Nicholas Piggin wrote:
>> Excerpts from Christophe Leroy's message of January 16, 2021 3:14 am:
>>>
>>>
>>> On 15/01/2021 at 17:49, Nicholas Piggin wrote:
>>>> Like other interrupt handler conversions, switch to getting registers
>>>> from the pt_regs argument.
>>>>
>>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>>> ---
>>>> arch/powerpc/kernel/head_fsl_booke.S | 6 +++---
>>>> arch/powerpc/kernel/traps.c | 5 +++--
>>>> 2 files changed, 6 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
>>>> index fdd4d274c245..0d4d9a6fcca1 100644
>>>> --- a/arch/powerpc/kernel/head_fsl_booke.S
>>>> +++ b/arch/powerpc/kernel/head_fsl_booke.S
>>>> @@ -364,12 +364,12 @@ interrupt_base:
>>>> /* Data Storage Interrupt */
>>>> START_EXCEPTION(DataStorage)
>>>> NORMAL_EXCEPTION_PROLOG(DATA_STORAGE)
>>>> - mfspr r5,SPRN_ESR /* Grab the ESR, save it, pass arg3 */
>>>> + mfspr r5,SPRN_ESR /* Grab the ESR, save it */
>>>> stw r5,_ESR(r11)
>>>> - mfspr r4,SPRN_DEAR /* Grab the DEAR, save it, pass arg2 */
>>>> + mfspr r4,SPRN_DEAR /* Grab the DEAR, save it */
>>>> + stw r4, _DEAR(r11)
>>>> andis. r10,r5,(ESR_ILK|ESR_DLK)@h
>>>> bne 1f
>>>> - stw r4, _DEAR(r11)
>>>> EXC_XFER_LITE(0x0300, handle_page_fault)
>>>> 1:
>>>> addi r3,r1,STACK_FRAME_OVERHEAD
>>>
>>> Why isn't the above done in patch 5?
>>
>> I don't think it's required there, is it?
>
> Ah yes, moving the 'stw' is needed only here.
>
> But the comments changes belong to patch 5, you have done exactly similar changes there in
> kernel/head_40x.S
>
> By the way, I think patch 17 could immediately follow patch 5 and patch 18 could follow patch 6.
I can probably do all these. I'll wait a couple of days and check if
Michael will merge the series before sending an update for small
changes.
Thanks,
Nick
* Re: [PATCH v6 01/39] KVM: PPC: Book3S HV: Context tracking exit guest context before enabling irqs
2021-01-15 16:49 ` [PATCH v6 01/39] KVM: PPC: Book3S HV: Context tracking exit guest context before enabling irqs Nicholas Piggin
@ 2021-01-16 10:38 ` Nicholas Piggin
0 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-16 10:38 UTC (permalink / raw)
To: linuxppc-dev; +Cc: kvm-ppc
Excerpts from Nicholas Piggin's message of January 16, 2021 2:49 am:
> Interrupts that occur in kernel mode expect that context tracking
> is set to kernel. Enabling local irqs before context tracking
> switches from guest to host means interrupts can come in and trigger
> warnings about wrong context, and possibly worse.
I think this is not actually a fix per se with context tracking as it is
today, because the interrupt handlers will save and update the state. It
only starts throwing warnings when moving to the more precise tracking,
where kernel interrupts always expect context to be in kernel mode.
The patch stands on its own just fine, but I'll reword slightly and
move it in the series to where it's needed.
Thanks,
Nick
>
> Cc: kvm-ppc@vger.kernel.org
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/kvm/book3s_hv.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 6f612d240392..d348e77cee20 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -3407,8 +3407,9 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
>
> kvmppc_set_host_core(pcpu);
>
> + guest_exit_irqoff();
> +
> local_irq_enable();
> - guest_exit();
>
> /* Let secondaries go back to the offline loop */
> for (i = 0; i < controlled_threads; ++i) {
> @@ -4217,8 +4218,9 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
>
> kvmppc_set_host_core(pcpu);
>
> + guest_exit_irqoff();
> +
> local_irq_enable();
> - guest_exit();
>
> cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest);
>
> --
> 2.23.0
>
>
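The ordering issue the patch fixes can be modelled in userspace with a toy context variable (a sketch only: the enum, counter, and function names below are invented for illustration and are not KVM code):

```c
#include <assert.h>

/* Toy model: a single context-tracking state and an irq that checks it. */
enum ctx { CTX_GUEST, CTX_KERNEL };
static enum ctx context;
static int wrong_context_warnings;

/* A kernel-mode interrupt expects context tracking to say "kernel". */
static void simulated_irq(void)
{
	if (context != CTX_KERNEL)
		wrong_context_warnings++;
}

/* Old order: irqs are enabled (so an irq can arrive) before guest_exit(). */
static void exit_guest_old_order(void)
{
	context = CTX_GUEST;
	simulated_irq();	/* local_irq_enable(): irq sees guest context */
	context = CTX_KERNEL;	/* guest_exit() runs too late */
}

/* New order: guest_exit_irqoff() switches context before irqs come on. */
static void exit_guest_new_order(void)
{
	context = CTX_GUEST;
	context = CTX_KERNEL;	/* guest_exit_irqoff(), irqs still off */
	simulated_irq();	/* local_irq_enable(): irq sees kernel context */
}
```

The model only captures the ordering: with the old order, an interrupt arriving right after local_irq_enable() observes guest context; with the new order it cannot.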
* Re: [PATCH v6 14/39] powerpc/perf: move perf irq/nmi handling details into traps.c
2021-01-15 16:49 ` [PATCH v6 14/39] powerpc/perf: move perf irq/nmi handling details into traps.c Nicholas Piggin
@ 2021-01-19 10:24 ` Athira Rajeev
2021-01-20 3:09 ` Nicholas Piggin
0 siblings, 1 reply; 56+ messages in thread
From: Athira Rajeev @ 2021-01-19 10:24 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: linuxppc-dev
> On 15-Jan-2021, at 10:19 PM, Nicholas Piggin <npiggin@gmail.com> wrote:
>
> This is required in order to allow more significant differences between
> NMI type interrupt handlers and regular asynchronous handlers.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/kernel/traps.c | 31 +++++++++++++++++++++++++++-
> arch/powerpc/perf/core-book3s.c | 35 ++------------------------------
> arch/powerpc/perf/core-fsl-emb.c | 25 -----------------------
> 3 files changed, 32 insertions(+), 59 deletions(-)
Hi Nick,
I reviewed this perf patch, which moves the nmi_enter/irq_enter into traps.c, and the
code-wise changes for perf look fine to me. Further, I tried to test this by applying the whole
patch series on top of a 5.11.0-rc3 kernel and using the test scenario below:
The intention of the testcase is to check whether perf NMI and asynchronous interrupts are
getting captured as expected. My test kernel module below tries to create a performance
monitor counter (PMC6) overflow between local_irq_save/local_irq_restore.
[Here interrupts are disabled, and regs->softe has IRQS_DISABLED.]
I am expecting the PMI to come as an NMI in this case. I am also configuring ftrace so that I
can confirm whether it comes as an NMI or a replayed interrupt from the trace.
Environment: one CPU online
Prerequisites for ftrace:
# cd /sys/kernel/debug/tracing
# echo 100 > buffer_percent
# echo 200000 > buffer_size_kb
# echo ppc-tb > trace_clock
# echo function > current_tracer
Part of sample kernel test module to trigger a PMI between
local_irq_save and local_irq_restore:
<<>>
static ulong delay = 1;
static void busy_wait(ulong time)
{
udelay(delay);
}
static __always_inline void irq_test(void)
{
unsigned long flags;
local_irq_save(flags);
trace_printk("IN IRQ TEST\n");
mtspr(SPRN_MMCR0, 0x80000000);
mtspr(SPRN_PMC6, 0x80000000 - 100);
mtspr(SPRN_MMCR0, 0x6004000);
busy_wait(delay);
trace_printk("IN IRQ TEST DONE\n");
local_irq_restore(flags);
mtspr(SPRN_MMCR0, 0x80000000);
mtspr(SPRN_PMC6, 0);
}
<<>>
But this resulted in a soft lockup. A snippet of the call trace is below:
[ 883.900762] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [swapper/0:0]
[ 883.901381] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G OE 5.11.0-rc3+ #34
--
[ 883.901999] NIP [c0000000000168d0] replay_soft_interrupts+0x70/0x2f0
[ 883.902032] LR [c00000000003b2b8] interrupt_exit_kernel_prepare+0x1e8/0x240
[ 883.902063] Call Trace:
[ 883.902085] [c000000001c96f50] [c00000000003b2b8] interrupt_exit_kernel_prepare+0x1e8/0x240 (unreliable)
[ 883.902139] [c000000001c96fb0] [c00000000000fd88] interrupt_return+0x158/0x200
[ 883.902185] --- interrupt: ea0 at __rb_reserve_next+0xc0/0x5b0
[ 883.902224] NIP: c0000000002d8980 LR: c0000000002d897c CTR: c0000000001aad90
[ 883.902262] REGS: c000000001c97020 TRAP: 0ea0 Tainted: G OE (5.11.0-rc3+)
[ 883.902301] MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 28000484 XER: 20040000
[ 883.902387] CFAR: c00000000000fe00 IRQMASK: 0
--
[ 883.902757] NIP [c0000000002d8980] __rb_reserve_next+0xc0/0x5b0
[ 883.902786] LR [c0000000002d897c] __rb_reserve_next+0xbc/0x5b0
[ 883.902824] --- interrupt: ea0
[ 883.902848] [c000000001c97360] [c0000000002d8fcc] ring_buffer_lock_reserve+0x15c/0x580
[ 883.902894] [c000000001c973f0] [c0000000002e82fc] trace_function+0x4c/0x1c0
[ 883.902930] [c000000001c97440] [c0000000002f6f50] function_trace_call+0x140/0x190
[ 883.902976] [c000000001c97470] [c00000000007d6f8] ftrace_call+0x4/0x44
[ 883.903021] [c000000001c97660] [c000000000dcf70c] __do_softirq+0x15c/0x3d4
[ 883.903066] [c000000001c97750] [c00000000015fc68] irq_exit+0x198/0x1b0
[ 883.903102] [c000000001c97780] [c000000000dc1790] timer_interrupt+0x170/0x3b0
[ 883.903148] [c000000001c977e0] [c000000000016994] replay_soft_interrupts+0x134/0x2f0
[ 883.903193] [c000000001c979d0] [c00000000003b2b8] interrupt_exit_kernel_prepare+0x1e8/0x240
[ 883.903240] [c000000001c97a30] [c00000000000fd88] interrupt_return+0x158/0x200
[ 883.903276] --- interrupt: ea0 at arch_local_irq_restore+0x70/0xc0
Thanks
Athira
>
> diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
> index 738370519937..bd55f201115b 100644
> --- a/arch/powerpc/kernel/traps.c
> +++ b/arch/powerpc/kernel/traps.c
> @@ -1892,11 +1892,40 @@ void vsx_unavailable_tm(struct pt_regs *regs)
> }
> #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
>
> -void performance_monitor_exception(struct pt_regs *regs)
> +static void performance_monitor_exception_nmi(struct pt_regs *regs)
> +{
> + nmi_enter();
> +
> + __this_cpu_inc(irq_stat.pmu_irqs);
> +
> + perf_irq(regs);
> +
> + nmi_exit();
> +}
> +
> +static void performance_monitor_exception_async(struct pt_regs *regs)
> {
> + irq_enter();
> +
> __this_cpu_inc(irq_stat.pmu_irqs);
>
> perf_irq(regs);
> +
> + irq_exit();
> +}
> +
> +void performance_monitor_exception(struct pt_regs *regs)
> +{
> + /*
> + * On 64-bit, if perf interrupts hit in a local_irq_disable
> + * (soft-masked) region, we consider them as NMIs. This is required to
> + * prevent hash faults on user addresses when reading callchains (and
> + * looks better from an irq tracing perspective).
> + */
> + if (IS_ENABLED(CONFIG_PPC64) && unlikely(arch_irq_disabled_regs(regs)))
> + performance_monitor_exception_nmi(regs);
> + else
> + performance_monitor_exception_async(regs);
> }
>
> #ifdef CONFIG_PPC_ADV_DEBUG_REGS
> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
> index 28206b1fe172..9fd06010e8b6 100644
> --- a/arch/powerpc/perf/core-book3s.c
> +++ b/arch/powerpc/perf/core-book3s.c
> @@ -110,10 +110,6 @@ static inline void perf_read_regs(struct pt_regs *regs)
> {
> regs->result = 0;
> }
> -static inline int perf_intr_is_nmi(struct pt_regs *regs)
> -{
> - return 0;
> -}
>
> static inline int siar_valid(struct pt_regs *regs)
> {
> @@ -353,15 +349,6 @@ static inline void perf_read_regs(struct pt_regs *regs)
> regs->result = use_siar;
> }
>
> -/*
> - * If interrupts were soft-disabled when a PMU interrupt occurs, treat
> - * it as an NMI.
> - */
> -static inline int perf_intr_is_nmi(struct pt_regs *regs)
> -{
> - return (regs->softe & IRQS_DISABLED);
> -}
> -
> /*
> * On processors like P7+ that have the SIAR-Valid bit, marked instructions
> * must be sampled only if the SIAR-valid bit is set.
> @@ -2279,7 +2266,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
> struct perf_event *event;
> unsigned long val[8];
> int found, active;
> - int nmi;
>
> if (cpuhw->n_limited)
> freeze_limited_counters(cpuhw, mfspr(SPRN_PMC5),
> @@ -2287,18 +2273,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
>
> perf_read_regs(regs);
>
> - /*
> - * If perf interrupts hit in a local_irq_disable (soft-masked) region,
> - * we consider them as NMIs. This is required to prevent hash faults on
> - * user addresses when reading callchains. See the NMI test in
> - * do_hash_page.
> - */
> - nmi = perf_intr_is_nmi(regs);
> - if (nmi)
> - nmi_enter();
> - else
> - irq_enter();
> -
> /* Read all the PMCs since we'll need them a bunch of times */
> for (i = 0; i < ppmu->n_counter; ++i)
> val[i] = read_pmc(i + 1);
> @@ -2344,8 +2318,8 @@ static void __perf_event_interrupt(struct pt_regs *regs)
> }
> }
> }
> - if (!found && !nmi && printk_ratelimit())
> - printk(KERN_WARNING "Can't find PMC that caused IRQ\n");
> + if (unlikely(!found) && !arch_irq_disabled_regs(regs))
> + printk_ratelimited(KERN_WARNING "Can't find PMC that caused IRQ\n");
>
> /*
> * Reset MMCR0 to its normal value. This will set PMXE and
> @@ -2355,11 +2329,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
> * we get back out of this interrupt.
> */
> write_mmcr0(cpuhw, cpuhw->mmcr.mmcr0);
> -
> - if (nmi)
> - nmi_exit();
> - else
> - irq_exit();
> }
>
> static void perf_event_interrupt(struct pt_regs *regs)
> diff --git a/arch/powerpc/perf/core-fsl-emb.c b/arch/powerpc/perf/core-fsl-emb.c
> index e0e7e276bfd2..ee721f420a7b 100644
> --- a/arch/powerpc/perf/core-fsl-emb.c
> +++ b/arch/powerpc/perf/core-fsl-emb.c
> @@ -31,19 +31,6 @@ static atomic_t num_events;
> /* Used to avoid races in calling reserve/release_pmc_hardware */
> static DEFINE_MUTEX(pmc_reserve_mutex);
>
> -/*
> - * If interrupts were soft-disabled when a PMU interrupt occurs, treat
> - * it as an NMI.
> - */
> -static inline int perf_intr_is_nmi(struct pt_regs *regs)
> -{
> -#ifdef __powerpc64__
> - return (regs->softe & IRQS_DISABLED);
> -#else
> - return 0;
> -#endif
> -}
> -
> static void perf_event_interrupt(struct pt_regs *regs);
>
> /*
> @@ -659,13 +646,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
> struct perf_event *event;
> unsigned long val;
> int found = 0;
> - int nmi;
> -
> - nmi = perf_intr_is_nmi(regs);
> - if (nmi)
> - nmi_enter();
> - else
> - irq_enter();
>
> for (i = 0; i < ppmu->n_counter; ++i) {
> event = cpuhw->event[i];
> @@ -690,11 +670,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
> mtmsr(mfmsr() | MSR_PMM);
> mtpmr(PMRN_PMGC0, PMGC0_PMIE | PMGC0_FCECE);
> isync();
> -
> - if (nmi)
> - nmi_exit();
> - else
> - irq_exit();
> }
>
> void hw_perf_event_setup(int cpu)
> --
> 2.23.0
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [PATCH v6 14/39] powerpc/perf: move perf irq/nmi handling details into traps.c
2021-01-19 10:24 ` Athira Rajeev
@ 2021-01-20 3:09 ` Nicholas Piggin
2021-01-20 4:21 ` Nicholas Piggin
2021-01-27 5:49 ` Athira Rajeev
0 siblings, 2 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-20 3:09 UTC (permalink / raw)
To: Athira Rajeev; +Cc: linuxppc-dev
Excerpts from Athira Rajeev's message of January 19, 2021 8:24 pm:
>
>
>> On 15-Jan-2021, at 10:19 PM, Nicholas Piggin <npiggin@gmail.com> wrote:
>>
>> This is required in order to allow more significant differences between
>> NMI type interrupt handlers and regular asynchronous handlers.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> arch/powerpc/kernel/traps.c | 31 +++++++++++++++++++++++++++-
>> arch/powerpc/perf/core-book3s.c | 35 ++------------------------------
>> arch/powerpc/perf/core-fsl-emb.c | 25 -----------------------
>> 3 files changed, 32 insertions(+), 59 deletions(-)
>
> Hi Nick,
>
> Reviewed this perf patch, which moves the nmi_enter/irq_enter into traps.c; the code-wise
> changes for perf look fine to me. Further, I tried testing this by applying the whole patch
> series on top of the 5.11.0-rc3 kernel, using the below test scenario:
>
> The intention of the test case is to check whether the perf NMI and asynchronous interrupts
> are getting captured as expected. My test kernel module below tries to create an overflow of
> one of the performance monitor counters (PMC6) between local_irq_save/local_irq_restore.
> [ Here interrupts are disabled and regs->softe has IRQS_DISABLED. ]
> I expect the PMI to come in as an NMI in this case. I am also configuring ftrace so that I
> can confirm from the trace whether it comes in as an NMI or a replayed interrupt.
>
> Environment: one CPU online
> Prerequisite for ftrace:
> # cd /sys/kernel/debug/tracing
> # echo 100 > buffer_percent
> # echo 200000 > buffer_size_kb
> # echo ppc-tb > trace_clock
> # echo function > current_tracer
>
> Part of sample kernel test module to trigger a PMI between
> local_irq_save and local_irq_restore:
>
> <<>>
> static ulong delay = 1;
> static void busy_wait(ulong time)
> {
> udelay(delay);
> }
> static __always_inline void irq_test(void)
> {
> unsigned long flags;
> local_irq_save(flags);
> trace_printk("IN IRQ TEST\n");
> mtspr(SPRN_MMCR0, 0x80000000);
> mtspr(SPRN_PMC6, 0x80000000 - 100);
> mtspr(SPRN_MMCR0, 0x6004000);
> busy_wait(delay);
> trace_printk("IN IRQ TEST DONE\n");
> local_irq_restore(flags);
> mtspr(SPRN_MMCR0, 0x80000000);
> mtspr(SPRN_PMC6, 0);
> }
> <<>>
>
> But this resulted in a soft lockup. Adding a snippet of the call trace below:
I'm not getting problems with your test case, but I am testing in a VM
so I may not be getting device interrupts so much (your 0xea0 interrupt).
I'll try testing on bare metal next. Does it reproduce easily, and does
the unpatched kernel definitely not have the problem?
A different issue: after my series, I don't see the perf "NMI" interrupt
in any traces under local_irq_disable, because it disables ftrace the
same as the other NMI interrupts, so your test wouldn't see them.
I don't know if this is exactly right. Can tracing cope with such NMIs
okay even if it's interrupted in the middle of the tracing code? Machine
check at least has to disable tracing because it runs in real mode;
machine check and sreset also want to disable tracing because something
is going wrong and we don't want to make it worse (e.g., to get a
cleaner crash). Should we still permit tracing of perf NMIs?
>
> [ 883.900762] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [swapper/0:0]
> [ 883.901381] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G OE 5.11.0-rc3+ #34
> --
> [ 883.901999] NIP [c0000000000168d0] replay_soft_interrupts+0x70/0x2f0
> [ 883.902032] LR [c00000000003b2b8] interrupt_exit_kernel_prepare+0x1e8/0x240
> [ 883.902063] Call Trace:
> [ 883.902085] [c000000001c96f50] [c00000000003b2b8] interrupt_exit_kernel_prepare+0x1e8/0x240 (unreliable)
> [ 883.902139] [c000000001c96fb0] [c00000000000fd88] interrupt_return+0x158/0x200
> [ 883.902185] --- interrupt: ea0 at __rb_reserve_next+0xc0/0x5b0
> [ 883.902224] NIP: c0000000002d8980 LR: c0000000002d897c CTR: c0000000001aad90
> [ 883.902262] REGS: c000000001c97020 TRAP: 0ea0 Tainted: G OE (5.11.0-rc3+)
> [ 883.902301] MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 28000484 XER: 20040000
> [ 883.902387] CFAR: c00000000000fe00 IRQMASK: 0
> --
> [ 883.902757] NIP [c0000000002d8980] __rb_reserve_next+0xc0/0x5b0
> [ 883.902786] LR [c0000000002d897c] __rb_reserve_next+0xbc/0x5b0
> [ 883.902824] --- interrupt: ea0
> [ 883.902848] [c000000001c97360] [c0000000002d8fcc] ring_buffer_lock_reserve+0x15c/0x580
> [ 883.902894] [c000000001c973f0] [c0000000002e82fc] trace_function+0x4c/0x1c0
> [ 883.902930] [c000000001c97440] [c0000000002f6f50] function_trace_call+0x140/0x190
> [ 883.902976] [c000000001c97470] [c00000000007d6f8] ftrace_call+0x4/0x44
> [ 883.903021] [c000000001c97660] [c000000000dcf70c] __do_softirq+0x15c/0x3d4
> [ 883.903066] [c000000001c97750] [c00000000015fc68] irq_exit+0x198/0x1b0
> [ 883.903102] [c000000001c97780] [c000000000dc1790] timer_interrupt+0x170/0x3b0
> [ 883.903148] [c000000001c977e0] [c000000000016994] replay_soft_interrupts+0x134/0x2f0
> [ 883.903193] [c000000001c979d0] [c00000000003b2b8] interrupt_exit_kernel_prepare+0x1e8/0x240
> [ 883.903240] [c000000001c97a30] [c00000000000fd88] interrupt_return+0x158/0x200
> [ 883.903276] --- interrupt: ea0 at arch_local_irq_restore+0x70/0xc0
You got a 0xea0 interrupt in the ftrace code. I wonder where it is
looping. Do you see more soft lockup messages?
Thanks,
Nick
>
> Thanks
> Athira
>>
>> diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
>> index 738370519937..bd55f201115b 100644
>> --- a/arch/powerpc/kernel/traps.c
>> +++ b/arch/powerpc/kernel/traps.c
>> @@ -1892,11 +1892,40 @@ void vsx_unavailable_tm(struct pt_regs *regs)
>> }
>> #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
>>
>> -void performance_monitor_exception(struct pt_regs *regs)
>> +static void performance_monitor_exception_nmi(struct pt_regs *regs)
>> +{
>> + nmi_enter();
>> +
>> + __this_cpu_inc(irq_stat.pmu_irqs);
>> +
>> + perf_irq(regs);
>> +
>> + nmi_exit();
>> +}
>> +
>> +static void performance_monitor_exception_async(struct pt_regs *regs)
>> {
>> + irq_enter();
>> +
>> __this_cpu_inc(irq_stat.pmu_irqs);
>>
>> perf_irq(regs);
>> +
>> + irq_exit();
>> +}
>> +
>> +void performance_monitor_exception(struct pt_regs *regs)
>> +{
>> + /*
>> + * On 64-bit, if perf interrupts hit in a local_irq_disable
>> + * (soft-masked) region, we consider them as NMIs. This is required to
>> + * prevent hash faults on user addresses when reading callchains (and
>> + * looks better from an irq tracing perspective).
>> + */
>> + if (IS_ENABLED(CONFIG_PPC64) && unlikely(arch_irq_disabled_regs(regs)))
>> + performance_monitor_exception_nmi(regs);
>> + else
>> + performance_monitor_exception_async(regs);
>> }
>>
>> #ifdef CONFIG_PPC_ADV_DEBUG_REGS
>> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
>> index 28206b1fe172..9fd06010e8b6 100644
>> --- a/arch/powerpc/perf/core-book3s.c
>> +++ b/arch/powerpc/perf/core-book3s.c
>> @@ -110,10 +110,6 @@ static inline void perf_read_regs(struct pt_regs *regs)
>> {
>> regs->result = 0;
>> }
>> -static inline int perf_intr_is_nmi(struct pt_regs *regs)
>> -{
>> - return 0;
>> -}
>>
>> static inline int siar_valid(struct pt_regs *regs)
>> {
>> @@ -353,15 +349,6 @@ static inline void perf_read_regs(struct pt_regs *regs)
>> regs->result = use_siar;
>> }
>>
>> -/*
>> - * If interrupts were soft-disabled when a PMU interrupt occurs, treat
>> - * it as an NMI.
>> - */
>> -static inline int perf_intr_is_nmi(struct pt_regs *regs)
>> -{
>> - return (regs->softe & IRQS_DISABLED);
>> -}
>> -
>> /*
>> * On processors like P7+ that have the SIAR-Valid bit, marked instructions
>> * must be sampled only if the SIAR-valid bit is set.
>> @@ -2279,7 +2266,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
>> struct perf_event *event;
>> unsigned long val[8];
>> int found, active;
>> - int nmi;
>>
>> if (cpuhw->n_limited)
>> freeze_limited_counters(cpuhw, mfspr(SPRN_PMC5),
>> @@ -2287,18 +2273,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
>>
>> perf_read_regs(regs);
>>
>> - /*
>> - * If perf interrupts hit in a local_irq_disable (soft-masked) region,
>> - * we consider them as NMIs. This is required to prevent hash faults on
>> - * user addresses when reading callchains. See the NMI test in
>> - * do_hash_page.
>> - */
>> - nmi = perf_intr_is_nmi(regs);
>> - if (nmi)
>> - nmi_enter();
>> - else
>> - irq_enter();
>> -
>> /* Read all the PMCs since we'll need them a bunch of times */
>> for (i = 0; i < ppmu->n_counter; ++i)
>> val[i] = read_pmc(i + 1);
>> @@ -2344,8 +2318,8 @@ static void __perf_event_interrupt(struct pt_regs *regs)
>> }
>> }
>> }
>> - if (!found && !nmi && printk_ratelimit())
>> - printk(KERN_WARNING "Can't find PMC that caused IRQ\n");
>> + if (unlikely(!found) && !arch_irq_disabled_regs(regs))
>> + printk_ratelimited(KERN_WARNING "Can't find PMC that caused IRQ\n");
>>
>> /*
>> * Reset MMCR0 to its normal value. This will set PMXE and
>> @@ -2355,11 +2329,6 @@ static void __perf_event_interrupt(struct pt_regs *regs)
>> * we get back out of this interrupt.
>> */
>> write_mmcr0(cpuhw, cpuhw->mmcr.mmcr0);
>> -
>> - if (nmi)
>> - nmi_exit();
>> - else
>> - irq_exit();
>> }
>>
>> static void perf_event_interrupt(struct pt_regs *regs)
>> diff --git a/arch/powerpc/perf/core-fsl-emb.c b/arch/powerpc/perf/core-fsl-emb.c
>> index e0e7e276bfd2..ee721f420a7b 100644
>> --- a/arch/powerpc/perf/core-fsl-emb.c
>> +++ b/arch/powerpc/perf/core-fsl-emb.c
>> @@ -31,19 +31,6 @@ static atomic_t num_events;
>> /* Used to avoid races in calling reserve/release_pmc_hardware */
>> static DEFINE_MUTEX(pmc_reserve_mutex);
>>
>> -/*
>> - * If interrupts were soft-disabled when a PMU interrupt occurs, treat
>> - * it as an NMI.
>> - */
>> -static inline int perf_intr_is_nmi(struct pt_regs *regs)
>> -{
>> -#ifdef __powerpc64__
>> - return (regs->softe & IRQS_DISABLED);
>> -#else
>> - return 0;
>> -#endif
>> -}
>> -
>> static void perf_event_interrupt(struct pt_regs *regs);
>>
>> /*
>> @@ -659,13 +646,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
>> struct perf_event *event;
>> unsigned long val;
>> int found = 0;
>> - int nmi;
>> -
>> - nmi = perf_intr_is_nmi(regs);
>> - if (nmi)
>> - nmi_enter();
>> - else
>> - irq_enter();
>>
>> for (i = 0; i < ppmu->n_counter; ++i) {
>> event = cpuhw->event[i];
>> @@ -690,11 +670,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
>> mtmsr(mfmsr() | MSR_PMM);
>> mtpmr(PMRN_PMGC0, PMGC0_PMIE | PMGC0_FCECE);
>> isync();
>> -
>> - if (nmi)
>> - nmi_exit();
>> - else
>> - irq_exit();
>> }
>>
>> void hw_perf_event_setup(int cpu)
>> --
>> 2.23.0
>>
>
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [PATCH v6 14/39] powerpc/perf: move perf irq/nmi handling details into traps.c
2021-01-20 3:09 ` Nicholas Piggin
@ 2021-01-20 4:21 ` Nicholas Piggin
2021-01-27 5:49 ` Athira Rajeev
1 sibling, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-20 4:21 UTC (permalink / raw)
To: Athira Rajeev; +Cc: linuxppc-dev
Excerpts from Nicholas Piggin's message of January 20, 2021 1:09 pm:
> Excerpts from Athira Rajeev's message of January 19, 2021 8:24 pm:
>>
>> [ 883.900762] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [swapper/0:0]
>> [ 883.901381] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G OE 5.11.0-rc3+ #34
>> --
>> [ 883.901999] NIP [c0000000000168d0] replay_soft_interrupts+0x70/0x2f0
>> [ 883.902032] LR [c00000000003b2b8] interrupt_exit_kernel_prepare+0x1e8/0x240
>> [ 883.902063] Call Trace:
>> [ 883.902085] [c000000001c96f50] [c00000000003b2b8] interrupt_exit_kernel_prepare+0x1e8/0x240 (unreliable)
>> [ 883.902139] [c000000001c96fb0] [c00000000000fd88] interrupt_return+0x158/0x200
>> [ 883.902185] --- interrupt: ea0 at __rb_reserve_next+0xc0/0x5b0
>> [ 883.902224] NIP: c0000000002d8980 LR: c0000000002d897c CTR: c0000000001aad90
>> [ 883.902262] REGS: c000000001c97020 TRAP: 0ea0 Tainted: G OE (5.11.0-rc3+)
>> [ 883.902301] MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 28000484 XER: 20040000
>> [ 883.902387] CFAR: c00000000000fe00 IRQMASK: 0
>> --
>> [ 883.902757] NIP [c0000000002d8980] __rb_reserve_next+0xc0/0x5b0
>> [ 883.902786] LR [c0000000002d897c] __rb_reserve_next+0xbc/0x5b0
>> [ 883.902824] --- interrupt: ea0
>> [ 883.902848] [c000000001c97360] [c0000000002d8fcc] ring_buffer_lock_reserve+0x15c/0x580
>> [ 883.902894] [c000000001c973f0] [c0000000002e82fc] trace_function+0x4c/0x1c0
>> [ 883.902930] [c000000001c97440] [c0000000002f6f50] function_trace_call+0x140/0x190
>> [ 883.902976] [c000000001c97470] [c00000000007d6f8] ftrace_call+0x4/0x44
>> [ 883.903021] [c000000001c97660] [c000000000dcf70c] __do_softirq+0x15c/0x3d4
>> [ 883.903066] [c000000001c97750] [c00000000015fc68] irq_exit+0x198/0x1b0
>> [ 883.903102] [c000000001c97780] [c000000000dc1790] timer_interrupt+0x170/0x3b0
>> [ 883.903148] [c000000001c977e0] [c000000000016994] replay_soft_interrupts+0x134/0x2f0
>> [ 883.903193] [c000000001c979d0] [c00000000003b2b8] interrupt_exit_kernel_prepare+0x1e8/0x240
>> [ 883.903240] [c000000001c97a30] [c00000000000fd88] interrupt_return+0x158/0x200
>> [ 883.903276] --- interrupt: ea0 at arch_local_irq_restore+0x70/0xc0
>
> You got a 0xea0 interrupt in the ftrace code. I wonder where it is
> looping. Do you see more soft lockup messages?
We should probably fix this recursion too. I was vaguely aware of it and
thought it might have existed with the old interrupt exit and replay
code as well and was pretty well bounded, but I'm not entirely sure it's
okay. And now that I've thought about it a bit harder, I think there is
actually a simple way to fix it -
[PATCH] powerpc/64: prevent replayed interrupt handlers from running
softirqs
Running softirqs enables interrupts, which can then end up recursing
into the irq soft-mask code we're trying to adjust, including replaying
interrupts itself, which may not be bounded. This abridged trace shows
how this can occur:
NIP replay_soft_interrupts
LR interrupt_exit_kernel_prepare
Call Trace:
interrupt_exit_kernel_prepare (unreliable)
interrupt_return
--- interrupt: ea0 at __rb_reserve_next
NIP __rb_reserve_next
LR __rb_reserve_next
Call Trace:
ring_buffer_lock_reserve
trace_function
function_trace_call
ftrace_call
__do_softirq
irq_exit
timer_interrupt
replay_soft_interrupts
interrupt_exit_kernel_prepare
interrupt_return
--- interrupt: ea0 at arch_local_irq_restore
Fix this by disabling bhs (softirqs) around the interrupt replay.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/irq.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 681abb7c0507..bb0d4fc8df89 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -189,6 +189,18 @@ void replay_soft_interrupts(void)
unsigned char happened = local_paca->irq_happened;
struct pt_regs regs;
+ /*
+ * Prevent softirqs from being run when an interrupt handler returns
+ * and calls irq_exit(), because softirq processing enables interrupts.
+ * If an interrupt is taken, it may then call replay_soft_interrupts
+ * on its way out, which gets messy and recursive.
+ *
+ * softirqs created by replayed interrupts will be run at the end of
+ * this function when bhs are enabled (if they were enabled in our
+ * caller).
+ */
+ local_bh_disable();
+
ppc_save_regs(&regs);
regs.softe = IRQS_ENABLED;
@@ -264,6 +276,8 @@ void replay_soft_interrupts(void)
trace_hardirqs_off();
goto again;
}
+
+ local_bh_enable();
}
notrace void arch_local_irq_restore(unsigned long mask)
--
2.23.0
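As a side note, the bounded-versus-unbounded behaviour this patch is after can be illustrated with a toy model (plain Python, not kernel code): if each replayed interrupt's irq_exit() runs softirqs inline, softirq processing re-enables interrupts and the replay can nest once per pending interrupt, whereas deferring softirq execution until after the replay loop (what the local_bh_disable()/local_bh_enable() pair achieves here) keeps the replay at a single level.

```python
def replay(pending, inline_softirqs, depth=1):
    """Toy model of replay_soft_interrupts(): return the deepest
    nesting level reached while draining `pending` interrupts."""
    deepest = depth
    while pending:
        pending -= 1                    # replay one interrupt
        if inline_softirqs and pending:
            # irq_exit() runs the softirq inline; softirq processing
            # re-enables interrupts, so the remaining work is replayed
            # in a nested invocation instead of this loop
            return max(deepest, replay(pending, True, depth + 1))
        # else: the softirq is queued and runs only after this
        # function returns (the local_bh_enable() at the end)
    return deepest

print(replay(5, inline_softirqs=False))  # 1: replay never nests
print(replay(5, inline_softirqs=True))   # 5: nesting grows with load
```

The model only captures the nesting shape, not timing or masking details, but it shows why the depth was tied to interrupt load before the change.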
^ permalink raw reply related [flat|nested] 56+ messages in thread
* Re: [PATCH v6 25/39] powerpc: convert interrupt handlers to use wrappers
2021-01-15 16:49 ` [PATCH v6 25/39] powerpc: convert interrupt handlers to use wrappers Nicholas Piggin
@ 2021-01-20 10:48 ` kernel test robot
2021-01-20 11:45 ` kernel test robot
1 sibling, 0 replies; 56+ messages in thread
From: kernel test robot @ 2021-01-20 10:48 UTC (permalink / raw)
To: Nicholas Piggin, linuxppc-dev; +Cc: kbuild-all, Nicholas Piggin
[-- Attachment #1: Type: text/plain, Size: 4075 bytes --]
Hi Nicholas,
I love your patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on v5.11-rc4 next-20210120]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Nicholas-Piggin/powerpc-interrupt-wrappers/20210116-023244
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-skiroot_defconfig (attached as .config)
compiler: powerpc64le-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/04d5131f1545e1e992962a5339135b605eb421a5
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Nicholas-Piggin/powerpc-interrupt-wrappers/20210116-023244
git checkout 04d5131f1545e1e992962a5339135b605eb421a5
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=powerpc
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
In file included from arch/powerpc/mm/book3s64/hash_utils.c:41:
>> arch/powerpc/mm/book3s64/hash_utils.c:1516:30: error: no previous prototype for '__do_hash_fault' [-Werror=missing-prototypes]
1516 | DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
| ^~~~~~~~~~~~~~~
arch/powerpc/include/asm/interrupt.h:150:24: note: in definition of macro 'DEFINE_INTERRUPT_HANDLER_RET'
150 | __visible noinstr long func(struct pt_regs *regs) \
| ^~~~
arch/powerpc/mm/book3s64/hash_utils.c:1905:6: error: no previous prototype for 'hpte_insert_repeating' [-Werror=missing-prototypes]
1905 | long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
| ^~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
vim +/__do_hash_fault +1516 arch/powerpc/mm/book3s64/hash_utils.c
1515
> 1516 DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
1517 {
1518 unsigned long ea = regs->dar;
1519 unsigned long dsisr = regs->dsisr;
1520 unsigned long access = _PAGE_PRESENT | _PAGE_READ;
1521 unsigned long flags = 0;
1522 struct mm_struct *mm;
1523 unsigned int region_id;
1524 long err;
1525
1526 region_id = get_region_id(ea);
1527 if ((region_id == VMALLOC_REGION_ID) || (region_id == IO_REGION_ID))
1528 mm = &init_mm;
1529 else
1530 mm = current->mm;
1531
1532 if (dsisr & DSISR_NOHPTE)
1533 flags |= HPTE_NOHPTE_UPDATE;
1534
1535 if (dsisr & DSISR_ISSTORE)
1536 access |= _PAGE_WRITE;
1537 /*
1538 * We set _PAGE_PRIVILEGED only when
1539 * kernel mode access kernel space.
1540 *
1541 * _PAGE_PRIVILEGED is NOT set
1542 * 1) when kernel mode access user space
1543 * 2) user space access kernel space.
1544 */
1545 access |= _PAGE_PRIVILEGED;
1546 if (user_mode(regs) || (region_id == USER_REGION_ID))
1547 access &= ~_PAGE_PRIVILEGED;
1548
1549 if (regs->trap == 0x400)
1550 access |= _PAGE_EXEC;
1551
1552 err = hash_page_mm(mm, ea, access, regs->trap, flags);
1553 if (unlikely(err < 0)) {
1554 // failed to insert a hash PTE due to a hypervisor error
1555 if (user_mode(regs)) {
1556 if (IS_ENABLED(CONFIG_PPC_SUBPAGE_PROT) && err == -2)
1557 _exception(SIGSEGV, regs, SEGV_ACCERR, ea);
1558 else
1559 _exception(SIGBUS, regs, BUS_ADRERR, ea);
1560 } else {
1561 bad_page_fault(regs, SIGBUS);
1562 }
1563 err = 0;
1564 }
1565
1566 return err;
1567 }
1568
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 21496 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [PATCH v6 25/39] powerpc: convert interrupt handlers to use wrappers
2021-01-15 16:49 ` [PATCH v6 25/39] powerpc: convert interrupt handlers to use wrappers Nicholas Piggin
2021-01-20 10:48 ` kernel test robot
@ 2021-01-20 11:45 ` kernel test robot
1 sibling, 0 replies; 56+ messages in thread
From: kernel test robot @ 2021-01-20 11:45 UTC (permalink / raw)
To: Nicholas Piggin, linuxppc-dev
Cc: clang-built-linux, kbuild-all, Nicholas Piggin
[-- Attachment #1: Type: text/plain, Size: 4556 bytes --]
Hi Nicholas,
I love your patch! Perhaps something to improve:
[auto build test WARNING on powerpc/next]
[also build test WARNING on v5.11-rc4 next-20210120]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Nicholas-Piggin/powerpc-interrupt-wrappers/20210116-023244
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-randconfig-r035-20210120 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 22b68440e1647e16b5ee24b924986207173c02d1)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install powerpc cross compiling tool for clang build
# apt-get install binutils-powerpc-linux-gnu
# https://github.com/0day-ci/linux/commit/04d5131f1545e1e992962a5339135b605eb421a5
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Nicholas-Piggin/powerpc-interrupt-wrappers/20210116-023244
git checkout 04d5131f1545e1e992962a5339135b605eb421a5
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
>> arch/powerpc/mm/book3s64/hash_utils.c:1516:30: warning: no previous prototype for function '__do_hash_fault' [-Wmissing-prototypes]
DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
^
arch/powerpc/mm/book3s64/hash_utils.c:1516:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
^
arch/powerpc/include/asm/interrupt.h:150:19: note: expanded from macro 'DEFINE_INTERRUPT_HANDLER_RET'
__visible noinstr long func(struct pt_regs *regs) \
^
arch/powerpc/mm/book3s64/hash_utils.c:1905:6: warning: no previous prototype for function 'hpte_insert_repeating' [-Wmissing-prototypes]
long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
^
arch/powerpc/mm/book3s64/hash_utils.c:1905:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
long hpte_insert_repeating(unsigned long hash, unsigned long vpn,
^
static
2 warnings generated.
vim +/__do_hash_fault +1516 arch/powerpc/mm/book3s64/hash_utils.c
1515
> 1516 DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
1517 {
1518 unsigned long ea = regs->dar;
1519 unsigned long dsisr = regs->dsisr;
1520 unsigned long access = _PAGE_PRESENT | _PAGE_READ;
1521 unsigned long flags = 0;
1522 struct mm_struct *mm;
1523 unsigned int region_id;
1524 long err;
1525
1526 region_id = get_region_id(ea);
1527 if ((region_id == VMALLOC_REGION_ID) || (region_id == IO_REGION_ID))
1528 mm = &init_mm;
1529 else
1530 mm = current->mm;
1531
1532 if (dsisr & DSISR_NOHPTE)
1533 flags |= HPTE_NOHPTE_UPDATE;
1534
1535 if (dsisr & DSISR_ISSTORE)
1536 access |= _PAGE_WRITE;
1537 /*
1538 * We set _PAGE_PRIVILEGED only when
1539 * kernel mode access kernel space.
1540 *
1541 * _PAGE_PRIVILEGED is NOT set
1542 * 1) when kernel mode access user space
1543 * 2) user space access kernel space.
1544 */
1545 access |= _PAGE_PRIVILEGED;
1546 if (user_mode(regs) || (region_id == USER_REGION_ID))
1547 access &= ~_PAGE_PRIVILEGED;
1548
1549 if (regs->trap == 0x400)
1550 access |= _PAGE_EXEC;
1551
1552 err = hash_page_mm(mm, ea, access, regs->trap, flags);
1553 if (unlikely(err < 0)) {
1554 // failed to insert a hash PTE due to a hypervisor error
1555 if (user_mode(regs)) {
1556 if (IS_ENABLED(CONFIG_PPC_SUBPAGE_PROT) && err == -2)
1557 _exception(SIGSEGV, regs, SEGV_ACCERR, ea);
1558 else
1559 _exception(SIGBUS, regs, BUS_ADRERR, ea);
1560 } else {
1561 bad_page_fault(regs, SIGBUS);
1562 }
1563 err = 0;
1564 }
1565
1566 return err;
1567 }
1568
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 33697 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [PATCH v6 14/39] powerpc/perf: move perf irq/nmi handling details into traps.c
2021-01-20 3:09 ` Nicholas Piggin
2021-01-20 4:21 ` Nicholas Piggin
@ 2021-01-27 5:49 ` Athira Rajeev
1 sibling, 0 replies; 56+ messages in thread
From: Athira Rajeev @ 2021-01-27 5:49 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: linuxppc-dev
[-- Attachment #1: Type: text/html, Size: 18752 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [PATCH v6 05/39] powerpc: remove arguments from fault handler functions
2021-01-15 16:49 ` [PATCH v6 05/39] powerpc: remove arguments from fault handler functions Nicholas Piggin
@ 2021-01-27 6:38 ` Christophe Leroy
2021-01-28 0:05 ` Nicholas Piggin
0 siblings, 1 reply; 56+ messages in thread
From: Christophe Leroy @ 2021-01-27 6:38 UTC (permalink / raw)
To: Nicholas Piggin, linuxppc-dev
Le 15/01/2021 à 17:49, Nicholas Piggin a écrit :
> Make mm fault handlers all just take the pt_regs * argument and load
> DAR/DSISR from that. Make those that return a value return long.
>
> This is done to make the function signatures match other handlers, which
> will help with a future patch to add wrappers. Explicit arguments could
> be added for performance but that would require more wrapper macro
> variants.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/include/asm/asm-prototypes.h | 4 ++--
> arch/powerpc/include/asm/book3s/64/mmu-hash.h | 2 +-
> arch/powerpc/include/asm/bug.h | 2 +-
> arch/powerpc/kernel/entry_32.S | 7 +------
> arch/powerpc/kernel/exceptions-64e.S | 2 --
> arch/powerpc/kernel/exceptions-64s.S | 17 ++++-------------
> arch/powerpc/kernel/head_40x.S | 10 +++++-----
> arch/powerpc/kernel/head_8xx.S | 6 +++---
> arch/powerpc/kernel/head_book3s_32.S | 5 ++---
> arch/powerpc/kernel/head_booke.h | 4 +---
> arch/powerpc/mm/book3s64/hash_utils.c | 8 +++++---
> arch/powerpc/mm/book3s64/slb.c | 11 +++++++----
> arch/powerpc/mm/fault.c | 5 ++---
> 13 files changed, 34 insertions(+), 49 deletions(-)
>
> diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
> index 238eacfda7b0..d6ea3f2d6cc0 100644
> --- a/arch/powerpc/kernel/entry_32.S
> +++ b/arch/powerpc/kernel/entry_32.S
> @@ -276,8 +276,7 @@ reenable_mmu:
> * We save a bunch of GPRs,
> * r3 can be different from GPR3(r1) at this point, r9 and r11
> * contains the old MSR and handler address respectively,
> - * r4 & r5 can contain page fault arguments that need to be passed
> - * along as well. r0, r6-r8, r12, CCR, CTR, XER etc... are left
> + * r0, r4-r8, r12, CCR, CTR, XER etc... are left
> * clobbered as they aren't useful past this point.
> */
>
> @@ -285,15 +284,11 @@ reenable_mmu:
> stw r9,8(r1)
> stw r11,12(r1)
> stw r3,16(r1)
As all functions take only 'regs' as an input parameter, maybe we can avoid saving 'r3' by
recalculating it from r1 after the call with 'addi r3,r1,STACK_FRAME_OVERHEAD'?
> - stw r4,20(r1)
> - stw r5,24(r1)
Patch 6 needs to go before this change. Probably the easiest would be to apply patch 6 before patch
5. Or this change needs to go after.
>
> /* If we are disabling interrupts (normal case), simply log it with
> * lockdep
> */
> 1: bl trace_hardirqs_off
> - lwz r5,24(r1)
> - lwz r4,20(r1)
> lwz r3,16(r1)
> lwz r11,12(r1)
> lwz r9,8(r1)
* Re: [PATCH v6 05/39] powerpc: remove arguments from fault handler functions
2021-01-27 6:38 ` Christophe Leroy
@ 2021-01-28 0:05 ` Nicholas Piggin
0 siblings, 0 replies; 56+ messages in thread
From: Nicholas Piggin @ 2021-01-28 0:05 UTC (permalink / raw)
To: Christophe Leroy, linuxppc-dev
Excerpts from Christophe Leroy's message of January 27, 2021 4:38 pm:
>
>
> On 15/01/2021 at 17:49, Nicholas Piggin wrote:
>> Make mm fault handlers all just take the pt_regs * argument and load
>> DAR/DSISR from that. Make those that return a value return long.
>>
>> This is done to make the function signatures match other handlers, which
>> will help with a future patch to add wrappers. Explicit arguments could
>> be added for performance but that would require more wrapper macro
>> variants.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> arch/powerpc/include/asm/asm-prototypes.h | 4 ++--
>> arch/powerpc/include/asm/book3s/64/mmu-hash.h | 2 +-
>> arch/powerpc/include/asm/bug.h | 2 +-
>> arch/powerpc/kernel/entry_32.S | 7 +------
>> arch/powerpc/kernel/exceptions-64e.S | 2 --
>> arch/powerpc/kernel/exceptions-64s.S | 17 ++++-------------
>> arch/powerpc/kernel/head_40x.S | 10 +++++-----
>> arch/powerpc/kernel/head_8xx.S | 6 +++---
>> arch/powerpc/kernel/head_book3s_32.S | 5 ++---
>> arch/powerpc/kernel/head_booke.h | 4 +---
>> arch/powerpc/mm/book3s64/hash_utils.c | 8 +++++---
>> arch/powerpc/mm/book3s64/slb.c | 11 +++++++----
>> arch/powerpc/mm/fault.c | 5 ++---
>> 13 files changed, 34 insertions(+), 49 deletions(-)
>>
>
>> diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
>> index 238eacfda7b0..d6ea3f2d6cc0 100644
>> --- a/arch/powerpc/kernel/entry_32.S
>> +++ b/arch/powerpc/kernel/entry_32.S
>> @@ -276,8 +276,7 @@ reenable_mmu:
>> * We save a bunch of GPRs,
>> * r3 can be different from GPR3(r1) at this point, r9 and r11
>> * contains the old MSR and handler address respectively,
>> - * r4 & r5 can contain page fault arguments that need to be passed
>> - * along as well. r0, r6-r8, r12, CCR, CTR, XER etc... are left
>> + * r0, r4-r8, r12, CCR, CTR, XER etc... are left
>> * clobbered as they aren't useful past this point.
>> */
>>
>> @@ -285,15 +284,11 @@ reenable_mmu:
>> stw r9,8(r1)
>> stw r11,12(r1)
>> stw r3,16(r1)
>
> As all functions only take 'regs' as input parameter, maybe we can avoid saving 'r3' by
> recalculating it from r1 after the call with 'addi r3,r1,STACK_FRAME_OVERHEAD' ?
It seems like it. All functions have regs as their first parameter already,
don't they? So this change could be done before this patch as well.
>
>> - stw r4,20(r1)
>> - stw r5,24(r1)
>
> Patch 6 needs to go before this change. Probably the easiest would be to apply patch 6 before patch
> 5. Or this change needs to go after.
Hmm okay thanks for finding that.
Thanks,
Nick
* Re: [PATCH v6 08/39] powerpc: rearrange do_page_fault error case to be inside exception_enter
2021-01-15 16:49 ` [PATCH v6 08/39] powerpc: rearrange do_page_fault error case to be inside exception_enter Nicholas Piggin
@ 2021-01-28 9:25 ` Christophe Leroy
0 siblings, 0 replies; 56+ messages in thread
From: Christophe Leroy @ 2021-01-28 9:25 UTC (permalink / raw)
To: Nicholas Piggin, linuxppc-dev
On 15/01/2021 at 17:49, Nicholas Piggin wrote:
> This keeps the context tracking over the entire interrupt handler which
> helps later with moving context tracking into interrupt wrappers.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/mm/fault.c | 28 ++++++++++++++++------------
> 1 file changed, 16 insertions(+), 12 deletions(-)
>
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index e476d7701413..e4121fd9fcf1 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -544,20 +544,24 @@ NOKPROBE_SYMBOL(__do_page_fault);
>
> long do_page_fault(struct pt_regs *regs)
> {
> - const struct exception_table_entry *entry;
> - enum ctx_state prev_state = exception_enter();
> - int rc = __do_page_fault(regs, regs->dar, regs->dsisr);
> - exception_exit(prev_state);
> - if (likely(!rc))
> - return 0;
> -
> - entry = search_exception_tables(regs->nip);
> - if (unlikely(!entry))
> - return rc;
Could we keep this layout by using a 'goto' to the end of the function, instead of pushing the
error handling to the right?
Because at the end of the series, once all the context tracking has moved into helpers, the
result looks unfriendly.
It would look cleaner as:
static long __do_page_fault(struct pt_regs *regs)
{
	long err;
	const struct exception_table_entry *entry;

	err = ___do_page_fault(regs, regs->dar, regs->dsisr);
	if (likely(!err))
		return 0;

	entry = search_exception_tables(regs->nip);
	if (likely(entry)) {
		instruction_pointer_set(regs, extable_fixup(entry));
		return 0;
	} else if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64)) {
		/* 32 and 64e handle this in asm */
		return err;
	}

	__bad_page_fault(regs, err);
	return 0;
}
NOKPROBE_SYMBOL(__do_page_fault);
> + enum ctx_state prev_state;
> + long err;
> +
> + prev_state = exception_enter();
> + err = __do_page_fault(regs, regs->dar, regs->dsisr);
> + if (unlikely(err)) {
> + const struct exception_table_entry *entry;
> +
> + entry = search_exception_tables(regs->nip);
> + if (likely(entry)) {
> + instruction_pointer_set(regs, extable_fixup(entry));
> + err = 0;
> + }
> + }
>
> - instruction_pointer_set(regs, extable_fixup(entry));
> + exception_exit(prev_state);
>
> - return 0;
> + return err;
> }
> NOKPROBE_SYMBOL(do_page_fault);
>
>
end of thread, other threads:[~2021-01-28 9:31 UTC | newest]
Thread overview: 56+ messages
-- links below jump to the message on this page --
2021-01-15 16:49 [PATCH v6 00/39] powerpc: interrupt wrappers Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 01/39] KVM: PPC: Book3S HV: Context tracking exit guest context before enabling irqs Nicholas Piggin
2021-01-16 10:38 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 02/39] powerpc/32s: move DABR match out of handle_page_fault Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 03/39] powerpc/64s: " Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 04/39] powerpc/64s: move the hash fault handling logic to C Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 05/39] powerpc: remove arguments from fault handler functions Nicholas Piggin
2021-01-27 6:38 ` Christophe Leroy
2021-01-28 0:05 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 06/39] powerpc: do_break get registers from regs Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 07/39] powerpc: bad_page_fault " Nicholas Piggin
2021-01-15 17:09 ` Christophe Leroy
2021-01-16 0:42 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 08/39] powerpc: rearrange do_page_fault error case to be inside exception_enter Nicholas Piggin
2021-01-28 9:25 ` Christophe Leroy
2021-01-15 16:49 ` [PATCH v6 09/39] powerpc/64s: move bad_page_fault handling to C Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 10/39] powerpc/64s: split do_hash_fault Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 11/39] powerpc/mm: Remove stale do_page_fault comment referring to SLB faults Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 12/39] powerpc/64s: slb comment update Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 13/39] powerpc/traps: add NOKPROBE_SYMBOL for sreset and mce Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 14/39] powerpc/perf: move perf irq/nmi handling details into traps.c Nicholas Piggin
2021-01-19 10:24 ` Athira Rajeev
2021-01-20 3:09 ` Nicholas Piggin
2021-01-20 4:21 ` Nicholas Piggin
2021-01-27 5:49 ` Athira Rajeev
2021-01-15 16:49 ` [PATCH v6 15/39] powerpc/time: move timer_broadcast_interrupt prototype to asm/time.h Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 16/39] powerpc: add and use unknown_async_exception Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 17/39] powerpc/fsl_booke/32: CacheLockingException remove args Nicholas Piggin
2021-01-15 17:14 ` Christophe Leroy
2021-01-16 0:43 ` Nicholas Piggin
2021-01-16 7:38 ` Christophe Leroy
2021-01-16 10:34 ` Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 18/39] powerpc: DebugException " Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 19/39] powerpc/cell: tidy up pervasive declarations Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 20/39] powerpc: introduce die_mce Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 21/39] powerpc/mce: ensure machine check handler always tests RI Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 22/39] powerpc: improve handling of unrecoverable system reset Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 23/39] powerpc: interrupt handler wrapper functions Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 24/39] powerpc: add interrupt wrapper entry / exit stub functions Nicholas Piggin
2021-01-15 16:49 ` [PATCH v6 25/39] powerpc: convert interrupt handlers to use wrappers Nicholas Piggin
2021-01-20 10:48 ` kernel test robot
2021-01-20 11:45 ` kernel test robot
2021-01-15 16:49 ` [PATCH v6 26/39] powerpc: add interrupt_cond_local_irq_enable helper Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 27/39] powerpc/64: context tracking remove _TIF_NOHZ Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 28/39] powerpc/64s/hash: improve context tracking of hash faults Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 29/39] powerpc/64: context tracking move to interrupt wrappers Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 30/39] powerpc/64: add context tracking to asynchronous interrupts Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 31/39] powerpc: handle irq_enter/irq_exit in interrupt handler wrappers Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 32/39] powerpc/64s: move context tracking exit to interrupt exit path Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 33/39] powerpc/64s: reconcile interrupts in C Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 34/39] powerpc/64: move account_stolen_time into its own function Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 35/39] powerpc/64: entry cpu time accounting in C Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 36/39] powerpc: move NMI entry/exit code into wrapper Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 37/39] powerpc/64s: move NMI soft-mask handling to C Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 38/39] powerpc/64s: runlatch interrupt handling in C Nicholas Piggin
2021-01-15 16:50 ` [PATCH v6 39/39] powerpc/64s: power4 nap fixup " Nicholas Piggin