* [PATCH v2 0/5] powerpc/64s: improve boot debugging
From: Nicholas Piggin @ 2022-09-26 5:56 UTC
To: linuxppc-dev; +Cc: Nicholas Piggin
This series provides a machine check handler to catch out of
bounds memory accesses in early boot before the MMU is enabled.
Since v1:
- 64e compile fix
Nicholas Piggin (5):
powerpc/64s/interrupt: move early boot ILE fixup into a macro
powerpc/64s: early boot machine check handler
powerpc/64: avoid using r13 in relocate
powerpc/64: don't set boot CPU's r13 to paca until the structure is
set up
powerpc/64s/interrupt: halt early boot interrupts if paca is not set
up
arch/powerpc/include/asm/asm-prototypes.h | 1 +
arch/powerpc/kernel/exceptions-64s.S | 117 +++++++++++++---------
arch/powerpc/kernel/head_64.S | 3 +
arch/powerpc/kernel/reloc_64.S | 14 +--
arch/powerpc/kernel/setup_64.c | 33 ++++--
arch/powerpc/kernel/traps.c | 14 +++
6 files changed, 120 insertions(+), 62 deletions(-)
--
2.37.2
* [PATCH v2 1/5] powerpc/64s/interrupt: move early boot ILE fixup into a macro
From: Nicholas Piggin @ 2022-09-26 5:56 UTC
To: linuxppc-dev; +Cc: Nicholas Piggin
In preparation for using this sequence in the machine check interrupt, move
it into a macro, with a small change to make it position independent.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/exceptions-64s.S | 100 +++++++++++++++------------
1 file changed, 55 insertions(+), 45 deletions(-)
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index dafa275f18bc..66e2adf50745 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -702,6 +702,60 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
ld r1,GPR1(r1)
.endm
+/*
+ * EARLY_BOOT_FIXUP - Fix real-mode interrupt with wrong endian in early boot.
+ *
+ * There's a short window during boot where although the kernel is running
+ * little endian, any exceptions will cause the CPU to switch back to big
+ * endian. For example a WARN() boils down to a trap instruction, which will
+ * cause a program check, and we end up here but with the CPU in big endian
+ * mode. The first instruction of the program check handler (in GEN_INT_ENTRY
+ * below) is an mtsprg, which when executed in the wrong endian is an lhzu with
+ * a ~3GB displacement from r3. The content of r3 is random, so that is a load
+ * from some random location, and depending on the system can easily lead to a
+ * checkstop, or an infinitely recursive page fault.
+ *
+ * So to handle that case we have a trampoline here that can detect we are in
+ * the wrong endian and flip us back to the correct endian. We can't flip
+ * MSR[LE] using mtmsr, so we have to use rfid. That requires backing up SRR0/1
+ * as well as a GPR. To do that we use SPRG0/2/3, as SPRG1 is already used for
+ * the paca. SPRG3 is user readable, but this trampoline is only active very
+ * early in boot, and SPRG3 will be reinitialised in vdso_getcpu_init() before
+ * userspace starts.
+ */
+.macro EARLY_BOOT_FIXUP
+#ifdef CONFIG_CPU_LITTLE_ENDIAN
+BEGIN_FTR_SECTION
+ tdi 0,0,0x48 // Trap never, or in reverse endian: b . + 8
+ b 2f // Skip trampoline if endian is correct
+ .long 0xa643707d // mtsprg 0, r11 Backup r11
+ .long 0xa6027a7d // mfsrr0 r11
+ .long 0xa643727d // mtsprg 2, r11 Backup SRR0 in SPRG2
+ .long 0xa6027b7d // mfsrr1 r11
+ .long 0xa643737d // mtsprg 3, r11 Backup SRR1 in SPRG3
+ .long 0xa600607d // mfmsr r11
+ .long 0x01006b69 // xori r11, r11, 1 Invert MSR[LE]
+ .long 0xa6037b7d // mtsrr1 r11
+ /*
+ * This is 'li r11,1f' where 1f is the absolute address of that
+ * label, byteswapped into the SI field of the instruction.
+ */
+ .long 0x00006039 | \
+ ((ABS_ADDR(1f, real_vectors) & 0x00ff) << 24) | \
+ ((ABS_ADDR(1f, real_vectors) & 0xff00) << 8)
+ .long 0xa6037a7d // mtsrr0 r11
+ .long 0x2400004c // rfid
+1:
+ mfsprg r11, 3
+ mtsrr1 r11 // Restore SRR1
+ mfsprg r11, 2
+ mtsrr0 r11 // Restore SRR0
+ mfsprg r11, 0 // Restore r11
+2:
+END_FTR_SECTION(0, 1) // nop out after boot
+#endif
+.endm
+
/*
* There are a few constraints to be concerned with.
* - Real mode exceptions code/data must be located at their physical location.
@@ -1619,51 +1673,7 @@ INT_DEFINE_BEGIN(program_check)
INT_DEFINE_END(program_check)
EXC_REAL_BEGIN(program_check, 0x700, 0x100)
-
-#ifdef CONFIG_CPU_LITTLE_ENDIAN
- /*
- * There's a short window during boot where although the kernel is
- * running little endian, any exceptions will cause the CPU to switch
- * back to big endian. For example a WARN() boils down to a trap
- * instruction, which will cause a program check, and we end up here but
- * with the CPU in big endian mode. The first instruction of the program
- * check handler (in GEN_INT_ENTRY below) is an mtsprg, which when
- * executed in the wrong endian is an lhzu with a ~3GB displacement from
- * r3. The content of r3 is random, so that is a load from some random
- * location, and depending on the system can easily lead to a checkstop,
- * or an infinitely recursive page fault.
- *
- * So to handle that case we have a trampoline here that can detect we
- * are in the wrong endian and flip us back to the correct endian. We
- * can't flip MSR[LE] using mtmsr, so we have to use rfid. That requires
- * backing up SRR0/1 as well as a GPR. To do that we use SPRG0/2/3, as
- * SPRG1 is already used for the paca. SPRG3 is user readable, but this
- * trampoline is only active very early in boot, and SPRG3 will be
- * reinitialised in vdso_getcpu_init() before userspace starts.
- */
-BEGIN_FTR_SECTION
- tdi 0,0,0x48 // Trap never, or in reverse endian: b . + 8
- b 1f // Skip trampoline if endian is correct
- .long 0xa643707d // mtsprg 0, r11 Backup r11
- .long 0xa6027a7d // mfsrr0 r11
- .long 0xa643727d // mtsprg 2, r11 Backup SRR0 in SPRG2
- .long 0xa6027b7d // mfsrr1 r11
- .long 0xa643737d // mtsprg 3, r11 Backup SRR1 in SPRG3
- .long 0xa600607d // mfmsr r11
- .long 0x01006b69 // xori r11, r11, 1 Invert MSR[LE]
- .long 0xa6037b7d // mtsrr1 r11
- .long 0x34076039 // li r11, 0x734
- .long 0xa6037a7d // mtsrr0 r11
- .long 0x2400004c // rfid
- mfsprg r11, 3
- mtsrr1 r11 // Restore SRR1
- mfsprg r11, 2
- mtsrr0 r11 // Restore SRR0
- mfsprg r11, 0 // Restore r11
-1:
-END_FTR_SECTION(0, 1) // nop out after boot
-#endif /* CONFIG_CPU_LITTLE_ENDIAN */
-
+ EARLY_BOOT_FIXUP
GEN_INT_ENTRY program_check, virt=0
EXC_REAL_END(program_check, 0x700, 0x100)
EXC_VIRT_BEGIN(program_check, 0x4700, 0x100)
--
2.37.2
* [PATCH v2 2/5] powerpc/64s: early boot machine check handler
From: Nicholas Piggin @ 2022-09-26 5:56 UTC
To: linuxppc-dev; +Cc: Nicholas Piggin
Use the early boot interrupt fixup in the machine check handler so that it
can run before the interrupt endianness is set up.
Branch to an early boot handler that just does a basic crash, which
allows it to run before ppc_md is set up. MSR[ME] is enabled on the boot
CPU earlier, and the machine check stack is temporarily set to the
middle of the init task stack.
This allows machine checks (e.g., due to invalid data access in real
mode) to print something useful earlier in boot (as soon as udbg is set
up, if CONFIG_PPC_EARLY_DEBUG=y).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/asm-prototypes.h | 1 +
arch/powerpc/kernel/exceptions-64s.S | 6 +++++-
arch/powerpc/kernel/setup_64.c | 14 ++++++++++++++
arch/powerpc/kernel/traps.c | 14 ++++++++++++++
4 files changed, 34 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index 81631e64dbeb..a1039b9da42e 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -36,6 +36,7 @@ int64_t __opal_call(int64_t a0, int64_t a1, int64_t a2, int64_t a3,
int64_t opcode, uint64_t msr);
/* misc runtime */
+void enable_machine_check(void);
extern u64 __bswapdi2(u64);
extern s64 __lshrdi3(s64, int);
extern s64 __ashldi3(s64, int);
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 66e2adf50745..9b853fdd59de 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1133,6 +1133,7 @@ INT_DEFINE_BEGIN(machine_check)
INT_DEFINE_END(machine_check)
EXC_REAL_BEGIN(machine_check, 0x200, 0x100)
+ EARLY_BOOT_FIXUP
GEN_INT_ENTRY machine_check_early, virt=0
EXC_REAL_END(machine_check, 0x200, 0x100)
EXC_VIRT_NONE(0x4200, 0x100)
@@ -1197,6 +1198,9 @@ BEGIN_FTR_SECTION
bl enable_machine_check
END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
addi r3,r1,STACK_FRAME_OVERHEAD
+BEGIN_FTR_SECTION
+ bl machine_check_early_boot
+END_FTR_SECTION(0, 1) // nop out after boot
bl machine_check_early
std r3,RESULT(r1) /* Save result */
ld r12,_MSR(r1)
@@ -3095,7 +3099,7 @@ CLOSE_FIXED_SECTION(virt_trampolines);
USE_TEXT_SECTION()
/* MSR[RI] should be clear because this uses SRR[01] */
-enable_machine_check:
+_GLOBAL(enable_machine_check)
mflr r0
bcl 20,31,$+4
0: mflr r3
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index ce8fc6575eaa..e68d316b993e 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -34,6 +34,7 @@
#include <linux/of.h>
#include <linux/of_fdt.h>
+#include <asm/asm-prototypes.h>
#include <asm/kvm_guest.h>
#include <asm/io.h>
#include <asm/kdump.h>
@@ -180,6 +181,16 @@ static void __init fixup_boot_paca(void)
{
/* The boot cpu is started */
get_paca()->cpu_start = 1;
+#ifdef CONFIG_PPC_BOOK3S_64
+ /*
+ * Give the early boot machine check stack somewhere to use, use
+ * half of the init stack. This is a bit hacky but there should not be
+ * deep stack usage in early init so shouldn't overflow it or overwrite
+ * things.
+ */
+ get_paca()->mc_emergency_sp = (void *)&init_thread_union +
+ (THREAD_SIZE/2);
+#endif
/* Allow percpu accesses to work until we setup percpu data */
get_paca()->data_offset = 0;
/* Mark interrupts soft and hard disabled in PACA */
@@ -357,6 +368,9 @@ void __init early_setup(unsigned long dt_ptr)
/* -------- printk is now safe to use ------- */
+ if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && (mfmsr() & MSR_HV))
+ enable_machine_check();
+
/* Try new device tree based feature discovery ... */
if (!dt_cpu_ftrs_init(__va(dt_ptr)))
/* Otherwise use the old style CPU table */
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index dadfcef5d6db..37f8375452ad 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -68,6 +68,7 @@
#include <asm/stacktrace.h>
#include <asm/nmi.h>
#include <asm/disassemble.h>
+#include <asm/udbg.h>
#if defined(CONFIG_DEBUGGER) || defined(CONFIG_KEXEC_CORE)
int (*__debugger)(struct pt_regs *regs) __read_mostly;
@@ -850,6 +851,19 @@ static void __machine_check_exception(struct pt_regs *regs)
}
#ifdef CONFIG_PPC_BOOK3S_64
+DEFINE_INTERRUPT_HANDLER_RAW(machine_check_early_boot)
+{
+ udbg_printf("Machine check (early boot)\n");
+ udbg_printf("SRR0=0x%016lx SRR1=0x%016lx\n", regs->nip, regs->msr);
+ udbg_printf(" DAR=0x%016lx DSISR=0x%08lx\n", regs->dar, regs->dsisr);
+ udbg_printf(" LR=0x%016lx R1=0x%08lx\n", regs->link, regs->gpr[1]);
+ udbg_printf("------\n");
+ die("Machine check (early boot)", regs, SIGBUS);
+ for (;;)
+ ;
+ return 0;
+}
+
DEFINE_INTERRUPT_HANDLER_ASYNC(machine_check_exception_async)
{
__machine_check_exception(regs);
--
2.37.2
* [PATCH v2 3/5] powerpc/64: avoid using r13 in relocate
From: Nicholas Piggin @ 2022-09-26 5:56 UTC
To: linuxppc-dev; +Cc: Nicholas Piggin
relocate() uses r13 in early boot before it is used for the paca. Use
a different register for this so r13 is kept unchanged until it is
set to the paca pointer.
Avoid r14 as well while we're here; there's no reason not to use volatile
registers, which is a bit less surprising, and r14 could become another
fixed register one day.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/reloc_64.S | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/kernel/reloc_64.S b/arch/powerpc/kernel/reloc_64.S
index 232e4549defe..efd52f2e7033 100644
--- a/arch/powerpc/kernel/reloc_64.S
+++ b/arch/powerpc/kernel/reloc_64.S
@@ -27,8 +27,8 @@ _GLOBAL(relocate)
add r9,r9,r12 /* r9 has runtime addr of .rela.dyn section */
ld r10,(p_st - 0b)(r12)
add r10,r10,r12 /* r10 has runtime addr of _stext */
- ld r13,(p_sym - 0b)(r12)
- add r13,r13,r12 /* r13 has runtime addr of .dynsym */
+ ld r4,(p_sym - 0b)(r12)
+ add r4,r4,r12 /* r4 has runtime addr of .dynsym */
/*
* Scan the dynamic section for the RELA, RELASZ and RELAENT entries.
@@ -84,16 +84,16 @@ _GLOBAL(relocate)
ld r0,16(r9) /* reloc->r_addend */
b .Lstore
.Luaddr64:
- srdi r14,r0,32 /* ELF64_R_SYM(reloc->r_info) */
+ srdi r5,r0,32 /* ELF64_R_SYM(reloc->r_info) */
clrldi r0,r0,32
cmpdi r0,R_PPC64_UADDR64
bne .Lnext
ld r6,0(r9)
ld r0,16(r9)
- mulli r14,r14,24 /* 24 == sizeof(elf64_sym) */
- add r14,r14,r13 /* elf64_sym[ELF64_R_SYM] */
- ld r14,8(r14)
- add r0,r0,r14
+ mulli r5,r5,24 /* 24 == sizeof(elf64_sym) */
+ add r5,r5,r4 /* elf64_sym[ELF64_R_SYM] */
+ ld r5,8(r5)
+ add r0,r0,r5
.Lstore:
add r0,r0,r3
stdx r0,r7,r6
--
2.37.2
* [PATCH v2 4/5] powerpc/64: don't set boot CPU's r13 to paca until the structure is set up
From: Nicholas Piggin @ 2022-09-26 5:56 UTC
To: linuxppc-dev; +Cc: Nicholas Piggin
The idea is to get to the point where, if r13 is non-zero, it contains a
reasonable paca. This can be used in the early boot program check and
machine check handlers to avoid running off into the weeds if they hit
before r13 has a paca.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/setup_64.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index e68d316b993e..83e564564f63 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -177,10 +177,10 @@ early_param("smt-enabled", early_smt_enabled);
#endif /* CONFIG_SMP */
/** Fix up paca fields required for the boot cpu */
-static void __init fixup_boot_paca(void)
+static void __init fixup_boot_paca(struct paca_struct *boot_paca)
{
/* The boot cpu is started */
- get_paca()->cpu_start = 1;
+ boot_paca->cpu_start = 1;
#ifdef CONFIG_PPC_BOOK3S_64
/*
* Give the early boot machine check stack somewhere to use, use
@@ -188,14 +188,14 @@ static void __init fixup_boot_paca(void)
* deep stack usage in early init so shouldn't overflow it or overwrite
* things.
*/
- get_paca()->mc_emergency_sp = (void *)&init_thread_union +
+ boot_paca->mc_emergency_sp = (void *)&init_thread_union +
(THREAD_SIZE/2);
#endif
/* Allow percpu accesses to work until we setup percpu data */
- get_paca()->data_offset = 0;
+ boot_paca->data_offset = 0;
/* Mark interrupts soft and hard disabled in PACA */
- irq_soft_mask_set(IRQS_DISABLED);
- get_paca()->irq_happened = PACA_IRQ_HARD_DIS;
+ boot_paca->irq_soft_mask = IRQS_DISABLED;
+ boot_paca->irq_happened = PACA_IRQ_HARD_DIS;
WARN_ON(mfmsr() & MSR_EE);
}
@@ -363,8 +363,8 @@ void __init early_setup(unsigned long dt_ptr)
* what CPU we are on.
*/
initialise_paca(&boot_paca, 0);
- setup_paca(&boot_paca);
- fixup_boot_paca();
+ fixup_boot_paca(&boot_paca);
+ setup_paca(&boot_paca); /* install the paca into registers */
/* -------- printk is now safe to use ------- */
@@ -393,8 +393,8 @@ void __init early_setup(unsigned long dt_ptr)
/* Poison paca_ptrs[0] again if it's not the boot cpu */
memset(&paca_ptrs[0], 0x88, sizeof(paca_ptrs[0]));
}
- setup_paca(paca_ptrs[boot_cpuid]);
- fixup_boot_paca();
+ fixup_boot_paca(paca_ptrs[boot_cpuid]);
+ setup_paca(paca_ptrs[boot_cpuid]); /* install the paca into registers */
/*
* Configure exception handlers. This include setting up trampolines
--
2.37.2
* [PATCH v2 5/5] powerpc/64s/interrupt: halt early boot interrupts if paca is not set up
From: Nicholas Piggin @ 2022-09-26 5:56 UTC
To: linuxppc-dev; +Cc: Nicholas Piggin
Ensure r13 is zero from very early in boot until it gets set to the
boot paca pointer. This allows early program and mce handlers to halt
if there is no valid paca, rather than potentially run off into the
weeds. This preserves register and memory contents for low level
debugging tools.
Nothing could be printed to the console at this point in any case, because
even udbg is only set up after the boot paca is set, so no output is lost
by halting instead.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/exceptions-64s.S | 15 +++++++++++++--
arch/powerpc/kernel/head_64.S | 3 +++
arch/powerpc/kernel/setup_64.c | 1 +
3 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 9b853fdd59de..2f3b8d8a7ef6 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -724,8 +724,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
* userspace starts.
*/
.macro EARLY_BOOT_FIXUP
-#ifdef CONFIG_CPU_LITTLE_ENDIAN
BEGIN_FTR_SECTION
+#ifdef CONFIG_CPU_LITTLE_ENDIAN
tdi 0,0,0x48 // Trap never, or in reverse endian: b . + 8
b 2f // Skip trampoline if endian is correct
.long 0xa643707d // mtsprg 0, r11 Backup r11
@@ -752,8 +752,19 @@ BEGIN_FTR_SECTION
mtsrr0 r11 // Restore SRR0
mfsprg r11, 0 // Restore r11
2:
-END_FTR_SECTION(0, 1) // nop out after boot
#endif
+ /*
+ * program check could hit at any time, and pseries can not block
+ * MSR[ME] in early boot. So check if there is anything useful in r13
+ * yet, and spin forever if not.
+ */
+ mtsprg 0, r11
+ mfcr r11
+ cmpdi r13, 0
+ beq .
+ mtcr r11
+ mfsprg r11, 0
+END_FTR_SECTION(0, 1) // nop out after boot
.endm
/*
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index cf2c08902c05..6aeba8a9814e 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -494,6 +494,9 @@ __start_initialization_multiplatform:
/* Make sure we are running in 64 bits mode */
bl enable_64b_mode
+ /* Zero r13 (paca) so early program check / mce don't use it */
+ li r13,0
+
/* Get TOC pointer (current runtime address) */
bl relative_toc
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 83e564564f63..4cb057e6b3aa 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -364,6 +364,7 @@ void __init early_setup(unsigned long dt_ptr)
*/
initialise_paca(&boot_paca, 0);
fixup_boot_paca(&boot_paca);
+ WARN_ON(local_paca != 0);
setup_paca(&boot_paca); /* install the paca into registers */
/* -------- printk is now safe to use ------- */
--
2.37.2
* Re: [PATCH v2 0/5] powerpc/64s: improve boot debugging
From: Michael Ellerman @ 2022-10-04 13:25 UTC
To: Nicholas Piggin, linuxppc-dev
On Mon, 26 Sep 2022 15:56:15 +1000, Nicholas Piggin wrote:
> This series provides a machine check handler to catch out of
> bounds memory accesses in early boot before the MMU is enabled.
>
> Since v1:
> - 64e compile fix
>
> Nicholas Piggin (5):
> powerpc/64s/interrupt: move early boot ILE fixup into a macro
> powerpc/64s: early boot machine check handler
> powerpc/64: avoid using r13 in relocate
> powerpc/64: don't set boot CPU's r13 to paca until the structure is
> set up
> powerpc/64s/interrupt: halt early boot interrupts if paca is not set
> up
>
> [...]
Applied to powerpc/next.
[1/5] powerpc/64s/interrupt: move early boot ILE fixup into a macro
https://git.kernel.org/powerpc/c/bf75a3258a40327b73c5b4458ae8102cfa921b40
[2/5] powerpc/64s: early boot machine check handler
https://git.kernel.org/powerpc/c/2f5182cffa43f31c241131a2c10a4ecd8e90fb3e
[3/5] powerpc/64: avoid using r13 in relocate
https://git.kernel.org/powerpc/c/b830c8754e046f96e84da9d3b3e028c4ceef2b18
[4/5] powerpc/64: don't set boot CPU's r13 to paca until the structure is set up
https://git.kernel.org/powerpc/c/519b2e317e39ac99ce589a7c8480c47a17d62638
[5/5] powerpc/64s/interrupt: halt early boot interrupts if paca is not set up
https://git.kernel.org/powerpc/c/e1100cee059ad0bea6a668177e835baa087a0c65
cheers