* RISCV: the mechanism of available_harts may cause other harts boot failure
@ 2022-09-05  6:22 Rick Chen
  2022-09-05  7:47 ` Nikita Shubin
  0 siblings, 1 reply; 10+ messages in thread
From: Rick Chen @ 2022-09-05  6:22 UTC (permalink / raw)
To: Lukas Auer
Cc: U-Boot Mailing List, Heinrich Schuchardt, Atish Patra, Anup Patel,
	Bin Meng, Sean Anderson, Leo Liang, rick, nikita.shubin

Hi,

When I free-run an SMP system, I once hit a failure case where some
harts didn't boot to the kernel shell successfully. However, I haven't
been able to reproduce it since, even after many tries.

But when I set a breakpoint while debugging with GDB, it triggers the
failure case every time.

I think the mechanism of available_harts does not provide a method
that guarantees the success of SMP booting. Maybe we should think of a
better way to handle SMP booting, or just remove it?

Thread 8 hit Breakpoint 1, harts_early_init ()

(gdb) c
Continuing.
[Switching to Thread 7]

Thread 7 hit Breakpoint 1, harts_early_init ()

(gdb)
Continuing.
[Switching to Thread 6]

Thread 6 hit Breakpoint 1, harts_early_init ()

(gdb)
Continuing.
[Switching to Thread 5]

Thread 5 hit Breakpoint 1, harts_early_init ()

(gdb)
Continuing.
[Switching to Thread 4]

Thread 4 hit Breakpoint 1, harts_early_init ()

(gdb)
Continuing.
[Switching to Thread 3]

Thread 3 hit Breakpoint 1, harts_early_init ()
(gdb)
Continuing.
[Switching to Thread 2]

Thread 2 hit Breakpoint 1, harts_early_init ()
(gdb)
Continuing.
[Switching to Thread 1]

Thread 1 hit Breakpoint 1, harts_early_init ()
(gdb)
Continuing.
[Switching to Thread 5]

Thread 5 hit Breakpoint 3, 0x0000000001200000 in ?? ()
(gdb) info threads
  Id   Target Id          Frame
  1    Thread 1 (hart 1)  secondary_hart_loop () at arch/riscv/cpu/start.S:436
  2    Thread 2 (hart 2)  secondary_hart_loop () at arch/riscv/cpu/start.S:436
  3    Thread 3 (hart 3)  secondary_hart_loop () at arch/riscv/cpu/start.S:436
  4    Thread 4 (hart 4)  secondary_hart_loop () at arch/riscv/cpu/start.S:436
* 5    Thread 5 (hart 5)  0x0000000001200000 in ?? ()
  6    Thread 6 (hart 6)  0x000000000000b650 in ?? ()
  7    Thread 7 (hart 7)  0x000000000000b650 in ?? ()
  8    Thread 8 (hart 8)  0x0000000000005fa0 in ?? ()
(gdb) c
Continuing.

[    0.175619] smp: Bringing up secondary CPUs ...
[    1.230474] CPU1: failed to come online
[    2.282349] CPU2: failed to come online
[    3.334394] CPU3: failed to come online
[    4.386783] CPU4: failed to come online
[    4.427829] smp: Brought up 1 node, 4 CPUs

/root # cat /proc/cpuinfo
processor	: 0
hart		: 4
isa		: rv64i2p0m2p0a2p0c2p0xv5-1p1
mmu		: sv39

processor	: 5
hart		: 5
isa		: rv64i2p0m2p0a2p0c2p0xv5-1p1
mmu		: sv39

processor	: 6
hart		: 6
isa		: rv64i2p0m2p0a2p0c2p0xv5-1p1
mmu		: sv39

processor	: 7
hart		: 7
isa		: rv64i2p0m2p0a2p0c2p0xv5-1p1
mmu		: sv39

/root #

Thanks,
Rick

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: RISCV: the mechanism of available_harts may cause other harts boot failure
  2022-09-05  6:22 RISCV: the mechanism of available_harts may cause other harts boot failure Rick Chen
@ 2022-09-05  7:47 ` Nikita Shubin
  2022-09-05 15:30   ` Sean Anderson
  0 siblings, 1 reply; 10+ messages in thread
From: Nikita Shubin @ 2022-09-05  7:47 UTC (permalink / raw)
To: Rick Chen
Cc: Lukas Auer, U-Boot Mailing List, Heinrich Schuchardt, Atish Patra,
	Anup Patel, Bin Meng, Sean Anderson, Leo Liang, rick

Hi Rick!

On Mon, 5 Sep 2022 14:22:41 +0800
Rick Chen <rickchen36@gmail.com> wrote:

> Hi,
>
> When I free-run an SMP system, I once hit a failure case where some
> harts didn't boot to the kernel shell successfully. However, I
> haven't been able to reproduce it since, even after many tries.
>
> But when I set a breakpoint while debugging with GDB, it triggers
> the failure case every time.

If a hart fails to register itself in available_harts before the main
hart reaches send_ipi_many:
https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/lib/smp.c#L50

it won't exit secondary_hart_loop:
https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/cpu/start.S#L433

as no IPI will be sent to it.

This might be exactly your case.

> I think the mechanism of available_harts does not provide a method
> that guarantees the success of SMP booting. Maybe we should think of
> a better way to handle SMP booting, or just remove it?

I haven't experienced any unexplained problems with hart_lottery or
available_harts_lock unless:

1) harts are started non-simultaneously
2) SPL/U-Boot is in some kind of TCM, OCRAM, etc. which is not cleared
   on reset, which leaves available_harts dirty
3) something is wrong with atomics

Also, there might be something wrong with IPI send/receive.

> [...]
>
> Thread 5 hit Breakpoint 3, 0x0000000001200000 in ?? ()
> (gdb) info threads
>   Id   Target Id          Frame
>   1    Thread 1 (hart 1)  secondary_hart_loop () at arch/riscv/cpu/start.S:436
>   2    Thread 2 (hart 2)  secondary_hart_loop () at arch/riscv/cpu/start.S:436
>   3    Thread 3 (hart 3)  secondary_hart_loop () at arch/riscv/cpu/start.S:436
>   4    Thread 4 (hart 4)  secondary_hart_loop () at arch/riscv/cpu/start.S:436
> * 5    Thread 5 (hart 5)  0x0000000001200000 in ?? ()
>   6    Thread 6 (hart 6)  0x000000000000b650 in ?? ()
>   7    Thread 7 (hart 7)  0x000000000000b650 in ?? ()
>   8    Thread 8 (hart 8)  0x0000000000005fa0 in ?? ()
> (gdb) c
> Continuing.

Do all the "offline" harts remain in the SPL/U-Boot
secondary_hart_loop?

> [...]
>
> Thanks,
> Rick

^ permalink raw reply	[flat|nested] 10+ messages in thread
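The race Nikita describes can be made concrete with a toy model. This is a deliberately simplified sketch, not U-Boot's actual code: the names `available_harts`, `hart_register` and `send_ipi_many` only mirror the roles played by the real mask in `arch/riscv/lib/smp.c` and the registration in `start.S`, and the "IPI" is just a bit in a second mask. It shows how a hart that checks in late (e.g. because a GDB breakpoint held it up) is simply invisible to the IPI loop:

```c
#include <stdint.h>

/* Toy model of the available_harts handshake. Hypothetical names;
 * this is a single-threaded simulation of the ordering problem, not
 * the real U-Boot implementation. */

static uint64_t available_harts;   /* bit N set => hart N checked in */
static uint64_t ipi_pending;       /* bit N set => hart N was sent an IPI */

/* Secondary hart: atomically set our bit in the mask (the real code
 * does this with an AMO in arch/riscv/cpu/start.S). */
void hart_register(int hartid)
{
    __atomic_fetch_or(&available_harts, 1ULL << hartid, __ATOMIC_SEQ_CST);
}

/* Main hart: send an IPI to every hart currently in the mask. A hart
 * that has not registered yet is silently skipped. */
void send_ipi_many(void)
{
    uint64_t mask = __atomic_load_n(&available_harts, __ATOMIC_SEQ_CST);
    for (int i = 0; i < 64; i++)
        if (mask & (1ULL << i))
            ipi_pending |= 1ULL << i;
}

int hart_got_ipi(int hartid)
{
    return !!(ipi_pending & (1ULL << hartid));
}
```

A hart that registers only after `send_ipi_many()` has already run never receives an IPI and, in the real system, would spin in `secondary_hart_loop` forever, which matches the "failed to come online" symptom above.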
* Re: RISCV: the mechanism of available_harts may cause other harts boot failure
  2022-09-05  7:47 ` Nikita Shubin
@ 2022-09-05 15:30   ` Sean Anderson
  2022-09-05 15:41     ` Heinrich Schuchardt
  2022-09-05 17:10     ` Nikita Shubin
  0 siblings, 2 replies; 10+ messages in thread
From: Sean Anderson @ 2022-09-05 15:30 UTC (permalink / raw)
To: Nikita Shubin, Rick Chen
Cc: Lukas Auer, U-Boot Mailing List, Heinrich Schuchardt, Atish Patra,
	Anup Patel, Bin Meng, Leo Liang, rick

On 9/5/22 3:47 AM, Nikita Shubin wrote:
> Hi Rick!
>
> [...]
>
> If a hart fails to register itself in available_harts before the main
> hart reaches send_ipi_many:
> https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/lib/smp.c#L50
>
> it won't exit secondary_hart_loop:
> https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/cpu/start.S#L433
>
> as no IPI will be sent to it.
>
> This might be exactly your case.

When working on the IPI mechanism, I considered this possibility.
However, there's really no way to know how long to wait. On normal
systems, the boot hart is going to do a lot of work before calling
send_ipi_many, and the other harts just have to make it through ~100
instructions. So I figured we would never run into this issue.

We might not even need the mask... the only direct reason we might is
OpenSBI, as spl_invoke_opensbi is the only function which uses the
wait parameter.

>> I think the mechanism of available_harts does not provide a method
>> that guarantees the success of SMP booting. Maybe we should think of
>> a better way to handle SMP booting, or just remove it?
>
> I haven't experienced any unexplained problems with hart_lottery or
> available_harts_lock unless:
>
> 1) harts are started non-simultaneously
> 2) SPL/U-Boot is in some kind of TCM, OCRAM, etc. which is not cleared
>    on reset, which leaves available_harts dirty

XIP, of course, has this problem every time and just doesn't use the
mask. I remember thinking a lot about how to deal with this, but I
never ended up sending a patch because I didn't have a XIP system.

--Sean

> 3) something is wrong with atomics
>
> Also, there might be something wrong with IPI send/receive.
>
> [...]

^ permalink raw reply	[flat|nested] 10+ messages in thread
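Nikita's point 2 (and the XIP case Sean mentions) can also be sketched as a toy model. Everything below is hypothetical: it just illustrates why a mask stored in memory that survives a warm reset is dangerous, by counting "phantom" harts that the stale mask claims are ready even though they never checked in during this boot:

```c
#include <stdint.h>

#define NHARTS 4

/* Pretend this word lives in TCM/OCRAM (or XIP flash data) that is
 * NOT zeroed on a warm reset: bits from the previous boot survive.
 * The 0x0f initializer stands in for that stale leftover value. */
static uint64_t available_harts = 0x0f;

/* Ground truth for the current boot only. */
static int checked_in[NHARTS];

void hart_register(int hartid)
{
    checked_in[hartid] = 1;
    __atomic_fetch_or(&available_harts, 1ULL << hartid, __ATOMIC_SEQ_CST);
}

/* Harts the mask claims are available but which never registered this
 * boot: IPIs sent to them go nowhere, and the boot hart may proceed
 * believing all secondaries are accounted for. */
int phantom_harts(void)
{
    int n = 0;
    for (int i = 0; i < NHARTS; i++)
        if ((available_harts & (1ULL << i)) && !checked_in[i])
            n++;
    return n;
}
```

This is why a RAM-resident image can rely on the loader zeroing BSS (making the mask start clean), while an XIP image, whose writable data is never re-initialized the same way, cannot use the mask at all.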
* Re: RISCV: the mechanism of available_harts may cause other harts boot failure
  2022-09-05 15:30   ` Sean Anderson
@ 2022-09-05 15:41     ` Heinrich Schuchardt
  2022-09-05 15:45       ` Sean Anderson
  2022-09-05 17:10     ` Nikita Shubin
  1 sibling, 1 reply; 10+ messages in thread
From: Heinrich Schuchardt @ 2022-09-05 15:41 UTC (permalink / raw)
To: Sean Anderson
Cc: Lukas Auer, U-Boot Mailing List, Atish Patra, Anup Patel, Bin Meng,
	Leo Liang, rick, Nikita Shubin, Rick Chen

On 9/5/22 17:30, Sean Anderson wrote:
> On 9/5/22 3:47 AM, Nikita Shubin wrote:
>> [...]
>>
>> If a hart fails to register itself in available_harts before the main
>> hart reaches send_ipi_many:
>> https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/lib/smp.c#L50
>>
>> it won't exit secondary_hart_loop:
>> https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/cpu/start.S#L433
>>
>> as no IPI will be sent to it.

Can we call send_ipi_many() again when booting? Do we need to call it
before booting?

Best regards

Heinrich

>> This might be exactly your case.
>
> [...]

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: RISCV: the mechanism of available_harts may cause other harts boot failure
  2022-09-05 15:41     ` Heinrich Schuchardt
@ 2022-09-05 15:45       ` Sean Anderson
  2022-09-05 16:00         ` Heinrich Schuchardt
  0 siblings, 1 reply; 10+ messages in thread
From: Sean Anderson @ 2022-09-05 15:45 UTC (permalink / raw)
To: Heinrich Schuchardt
Cc: Lukas Auer, U-Boot Mailing List, Atish Patra, Anup Patel, Bin Meng,
	Leo Liang, rick, Nikita Shubin, Rick Chen

On 9/5/22 11:41 AM, Heinrich Schuchardt wrote:
> On 9/5/22 17:30, Sean Anderson wrote:
>> [...]
>
> Can we call send_ipi_many() again when booting?

AFAIK we do; see arch/riscv/lib/bootm.c and arch/riscv/lib/spl.c.

> Do we need to call it before booting?

Yes. We also call it when relocating (in SPL and U-Boot proper).

> [...]

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: RISCV: the mechanism of available_harts may cause other harts boot failure
  2022-09-05 15:45       ` Sean Anderson
@ 2022-09-05 16:00         ` Heinrich Schuchardt
  2022-09-05 16:14           ` Sean Anderson
  0 siblings, 1 reply; 10+ messages in thread
From: Heinrich Schuchardt @ 2022-09-05 16:00 UTC (permalink / raw)
To: Sean Anderson
Cc: Lukas Auer, U-Boot Mailing List, Atish Patra, Anup Patel, Bin Meng,
	Leo Liang, rick, Nikita Shubin, Rick Chen

On 9/5/22 17:45, Sean Anderson wrote:
> On 9/5/22 11:41 AM, Heinrich Schuchardt wrote:
>> [...]
>>
>> Can we call send_ipi_many() again when booting?
>
> AFAIK we do; see arch/riscv/lib/bootm.c and arch/riscv/lib/spl.c.

arch/riscv/lib/bootm.c:99:
	ret = smp_call_function(images->ep,

This has no effect when booting via UEFI. Should
efi_exit_boot_services() call the function?

Best regards

Heinrich

>> Do we need to call it before booting?
>
> Yes. We also call it when relocating (in SPL and U-Boot proper).
>
> [...]

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: RISCV: the mechanism of available_harts may cause other harts boot failure
  2022-09-05 16:00         ` Heinrich Schuchardt
@ 2022-09-05 16:14           ` Sean Anderson
  2022-09-05 16:30             ` Heinrich Schuchardt
  0 siblings, 1 reply; 10+ messages in thread
From: Sean Anderson @ 2022-09-05 16:14 UTC (permalink / raw)
To: Heinrich Schuchardt
Cc: Lukas Auer, U-Boot Mailing List, Atish Patra, Anup Patel, Bin Meng,
	Leo Liang, rick, Nikita Shubin, Rick Chen

On 9/5/22 12:00 PM, Heinrich Schuchardt wrote:
> On 9/5/22 17:45, Sean Anderson wrote:
>> [...]
>
> arch/riscv/lib/bootm.c:99:
> 	ret = smp_call_function(images->ep,
>
> This has no effect when booting via UEFI.

How do you figure?

> Should efi_exit_boot_services() call the function?

Generally, this needs to be called when secondary_hart_loop is going to
be overwritten. That can happen either because U-Boot is relocating (so
something else may occupy the space where it used to be), or because we
are executing the next stage of boot (which may then reuse the memory
occupied by secondary_hart_loop for something else).

AIUI the EFI client?/payload? gets started by U-Boot, which sticks
around providing services. I would expect the initial jump to the EFI
payload to cause the secondary harts to jump there as well.

--Sean

> [...]

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: RISCV: the machanism of available_harts may cause other harts boot failure 2022-09-05 16:14 ` Sean Anderson @ 2022-09-05 16:30 ` Heinrich Schuchardt 0 siblings, 0 replies; 10+ messages in thread From: Heinrich Schuchardt @ 2022-09-05 16:30 UTC (permalink / raw) To: Sean Anderson Cc: Lukas Auer, U-Boot Mailing List, Atish Patra, Anup Patel, Bin Meng, Leo Liang, rick, Nikita Shubin, Rick Chen On 9/5/22 18:14, Sean Anderson wrote: > On 9/5/22 12:00 PM, Heinrich Schuchardt wrote: >> On 9/5/22 17:45, Sean Anderson wrote: >>> On 9/5/22 11:41 AM, Heinrich Schuchardt wrote: >>>> On 9/5/22 17:30, Sean Anderson wrote: >>>>> On 9/5/22 3:47 AM, Nikita Shubin wrote: >>>>>> Hi Rick! >>>>>> >>>>>> On Mon, 5 Sep 2022 14:22:41 +0800 >>>>>> Rick Chen <rickchen36@gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> When I free-run a SMP system, I once hit a failure case where some >>>>>>> harts didn't boot to the kernel shell successfully. >>>>>>> However it can't be duplicated anymore even if I try many times. >>>>>>> >>>>>>> But when I set a break during debugging with GDB, it can trigger the >>>>>>> failure case each time. >>>>>> >>>>>> If hart fails to register itself to available_harts before >>>>>> send_ipi_many is hit by the main hart: >>>>>> https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/lib/smp.c#L50 >>>>>> >>>>>> it won't exit the secondary_hart_loop: >>>>>> https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/cpu/start.S#L433 >>>>>> As no ipi will be sent to it. >>>> >>>> Can we call send_ipi_many() again when booting? >>> >>> AFAIK we do; see arch/riscv/lib/bootm.c and arch/riscv/lib/spl.c >> >> arch/riscv/lib/bootm.c:99: >> ret = smp_call_function(images->ep, >> >> This has no effect when booting via UEFI. > > How do you figure? U-Boot never calls the legacy entry point when booting via UEFI. > >> Should efi_exit_boot_services() call the function? 
> > Generally, this needs to be called when secondary_hart_loop is going to be > overwritten. This can either be because U-Boot is relocating (and so > something > else may occupy the space where it used to be), or we are executing the > next > stage of boot (which may then reuse the memory occupied by > secondary_hart_loop > for something else). > > AIUI the EFI client?/payload? gets started by U-Boot, which sticks around > providing services. I would expect the initial jump to the EFI payload > to cause > the secondary harts to jump there as well. Secondary harts never enter UEFI payloads. Best regards Heinrich > > --Sean > >> Best regards >> >> Heinrich >> >>> >>>> Do we need to call it before booting? >>> >>> Yes. We also call it when relocating (in SPL and U-Boot proper). >>> >>>>>> >>>>>> This might be exactly your case. >>>>> >>>>> When working on the IPI mechanism, I considered this possibility. >>>>> However, >>>>> there's really no way to know how long to wait. On normal systems, >>>>> the boot >>>>> hart is going to do a lot of work before calling send_ipi_many, and >>>>> the >>>>> other harts just have to make it through ~100 instructions. So I >>>>> figured we >>>>> would never run into this issue. >>>>> >>>>> We might not even need the mask... the only direct reason we might is >>>>> for >>>>> OpenSBI, as spl_invoke_opensbi is the only function which uses the >>>>> wait >>>>> parameter. >>>>> >>>>>>> I think the mechanism of available_harts does not provide a method >>>>>>> that guarantees the success of the SMP system. >>>>>>> Maybe we shall think of a better way for the SMP booting or just >>>>>>> remove it ? >>>>>> >>>>>> I haven't experienced any unexplained problem with hart_lottery or >>>>>> available_harts_lock unless: >>>>>> >>>>>> 1) harts are started non-simultaneously >>>>>> 2) SPL/U-Boot is in some kind of TCM, OCRAM, etc... 
which is not >>>>>> cleared >>>>>> on reset which leaves available_harts dirty >>>>> >>>>> XIP, of course, has this problem every time and just doesn't use the >>>>> mask. >>>>> I remember thinking a lot about how to deal with this, but I never >>>>> ended >>>>> up sending a patch because I didn't have a XIP system. >>>>> >>>>> --Sean >>>>> >>>>>> 3) something is wrong with atomics >>>>>> >>>>>> Also there might be something wrong with IPI send/recieve. >>>>>> >>>>>>> >>>>>>> Thread 8 hit Breakpoint 1, harts_early_init () >>>>>>> >>>>>>> (gdb) c >>>>>>> Continuing. >>>>>>> [Switching to Thread 7] >>>>>>> >>>>>>> Thread 7 hit Breakpoint 1, harts_early_init () >>>>>>> >>>>>>> (gdb) >>>>>>> Continuing. >>>>>>> [Switching to Thread 6] >>>>>>> >>>>>>> Thread 6 hit Breakpoint 1, harts_early_init () >>>>>>> >>>>>>> (gdb) >>>>>>> Continuing. >>>>>>> [Switching to Thread 5] >>>>>>> >>>>>>> Thread 5 hit Breakpoint 1, harts_early_init () >>>>>>> >>>>>>> (gdb) >>>>>>> Continuing. >>>>>>> [Switching to Thread 4] >>>>>>> >>>>>>> Thread 4 hit Breakpoint 1, harts_early_init () >>>>>>> >>>>>>> (gdb) >>>>>>> Continuing. >>>>>>> [Switching to Thread 3] >>>>>>> >>>>>>> Thread 3 hit Breakpoint 1, harts_early_init () >>>>>>> (gdb) >>>>>>> Continuing. >>>>>>> [Switching to Thread 2] >>>>>>> >>>>>>> Thread 2 hit Breakpoint 1, harts_early_init () >>>>>>> (gdb) >>>>>>> Continuing. >>>>>>> [Switching to Thread 1] >>>>>>> >>>>>>> Thread 1 hit Breakpoint 1, harts_early_init () >>>>>>> (gdb) >>>>>>> Continuing. >>>>>>> [Switching to Thread 5] >>>>>>> >>>>>>> >>>>>>> Thread 5 hit Breakpoint 3, 0x0000000001200000 in ?? 
() >>>>>>> (gdb) info threads >>>>>>> Id Target Id Frame >>>>>>> 1 Thread 1 (hart 1) secondary_hart_loop () at >>>>>>> arch/riscv/cpu/start.S:436 2 Thread 2 (hart 2) >>>>>>> secondary_hart_loop >>>>>>> () at arch/riscv/cpu/start.S:436 3 Thread 3 (hart 3) >>>>>>> secondary_hart_loop () at arch/riscv/cpu/start.S:436 4 Thread 4 >>>>>>> (hart 4) secondary_hart_loop () at arch/riscv/cpu/start.S:436 >>>>>>> * 5 Thread 5 (hart 5) 0x0000000001200000 in ?? () >>>>>>> 6 Thread 6 (hart 6) 0x000000000000b650 in ?? () >>>>>>> 7 Thread 7 (hart 7) 0x000000000000b650 in ?? () >>>>>>> 8 Thread 8 (hart 8) 0x0000000000005fa0 in ?? () >>>>>>> (gdb) c >>>>>>> Continuing. >>>>>> >>>>>> Do they all "offline" harts remain in SPL/U-Boot >>>>>> secondary_hart_loop ? >>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> [ 0.175619] smp: Bringing up secondary CPUs ... >>>>>>> [ 1.230474] CPU1: failed to come online >>>>>>> [ 2.282349] CPU2: failed to come online >>>>>>> [ 3.334394] CPU3: failed to come online >>>>>>> [ 4.386783] CPU4: failed to come online >>>>>>> [ 4.427829] smp: Brought up 1 node, 4 CPUs >>>>>>> >>>>>>> >>>>>>> /root # cat /proc/cpuinfo >>>>>>> processor : 0 >>>>>>> hart : 4 >>>>>>> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 >>>>>>> mmu : sv39 >>>>>>> >>>>>>> processor : 5 >>>>>>> hart : 5 >>>>>>> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 >>>>>>> mmu : sv39 >>>>>>> >>>>>>> processor : 6 >>>>>>> hart : 6 >>>>>>> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 >>>>>>> mmu : sv39 >>>>>>> >>>>>>> processor : 7 >>>>>>> hart : 7 >>>>>>> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 >>>>>>> mmu : sv39 >>>>>>> >>>>>>> /root # >>>>>>> >>>>>>> Thanks, >>>>>>> Rick >>>>>> >>>>> >>>> >>> >>> >> > > ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: RISCV: the machanism of available_harts may cause other harts boot failure 2022-09-05 15:30 ` Sean Anderson 2022-09-05 15:41 ` Heinrich Schuchardt @ 2022-09-05 17:10 ` Nikita Shubin 2022-09-06 1:51 ` Rick Chen 1 sibling, 1 reply; 10+ messages in thread From: Nikita Shubin @ 2022-09-05 17:10 UTC (permalink / raw) To: Sean Anderson Cc: Rick Chen, Lukas Auer, U-Boot Mailing List, Heinrich Schuchardt, Atish Patra, Anup Patel, Bin Meng, Leo Liang, rick On Mon, 5 Sep 2022 11:30:38 -0400 Sean Anderson <seanga2@gmail.com> wrote: > On 9/5/22 3:47 AM, Nikita Shubin wrote: > > Hi Rick! > > > > On Mon, 5 Sep 2022 14:22:41 +0800 > > Rick Chen <rickchen36@gmail.com> wrote: > > > >> Hi, > >> > >> When I free-run a SMP system, I once hit a failure case where some > >> harts didn't boot to the kernel shell successfully. > >> However it can't be duplicated anymore even if I try many times. > >> > >> But when I set a break during debugging with GDB, it can trigger > >> the failure case each time. > > > > If hart fails to register itself to available_harts before > > send_ipi_many is hit by the main hart: > > https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/lib/smp.c#L50 > > > > it won't exit the secondary_hart_loop: > > https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/cpu/start.S#L433 > > As no ipi will be sent to it. > > > > This might be exactly your case. > > When working on the IPI mechanism, I considered this possibility. > However, there's really no way to know how long to wait. On normal > systems, the boot hart is going to do a lot of work before calling > send_ipi_many, and the other harts just have to make it through ~100 > instructions. So I figured we would never run into this issue. > > We might not even need the mask... the only direct reason we might is > for OpenSBI, as spl_invoke_opensbi is the only function which uses > the wait parameter. 
Actually I think available_harts is duplicated by the device tree, so we can:

1) drop registering harts in start.S (and the related lock) completely
2) fill gd->arch.available_harts in send_ipi_many relying on the device tree, and also make riscv_send_ipi non-fatal
3) move this procedure to the very end, just before spl_invoke_opensbi
4) maybe even wrap all of the above in some CONFIG option which enforces checking that harts are alive, otherwise just pass the device-tree hart count

> > >> I think the mechanism of available_harts does not provide a method > >> that guarantees the success of the SMP system. > >> Maybe we shall think of a better way for the SMP booting or just > >> remove it ? > > > > I haven't experienced any unexplained problem with hart_lottery or > > available_harts_lock unless: > > > > 1) harts are started non-simultaneously > > 2) SPL/U-Boot is in some kind of TCM, OCRAM, etc... which is not > > cleared on reset which leaves available_harts dirty > > XIP, of course, has this problem every time and just doesn't use the > mask. I remember thinking a lot about how to deal with this, but I > never ended up sending a patch because I didn't have a XIP system.

It can be partly emulated by setting up the SPL region as read-only via PMP before start.

> > --Sean > > > 3) something is wrong with atomics > > > > Also there might be something wrong with IPI send/recieve. > > > >> > >> Thread 8 hit Breakpoint 1, harts_early_init () > >> > >> (gdb) c > >> Continuing. > >> [Switching to Thread 7] > >> > >> Thread 7 hit Breakpoint 1, harts_early_init () > >> > >> (gdb) > >> Continuing. > >> [Switching to Thread 6] > >> > >> Thread 6 hit Breakpoint 1, harts_early_init () > >> > >> (gdb) > >> Continuing. > >> [Switching to Thread 5] > >> > >> Thread 5 hit Breakpoint 1, harts_early_init () > >> > >> (gdb) > >> Continuing. > >> [Switching to Thread 4] > >> > >> Thread 4 hit Breakpoint 1, harts_early_init () > >> > >> (gdb) > >> Continuing.
> >> [Switching to Thread 3] > >> > >> Thread 3 hit Breakpoint 1, harts_early_init () > >> (gdb) > >> Continuing. > >> [Switching to Thread 2] > >> > >> Thread 2 hit Breakpoint 1, harts_early_init () > >> (gdb) > >> Continuing. > >> [Switching to Thread 1] > >> > >> Thread 1 hit Breakpoint 1, harts_early_init () > >> (gdb) > >> Continuing. > >> [Switching to Thread 5] > >> > >> > >> Thread 5 hit Breakpoint 3, 0x0000000001200000 in ?? () > >> (gdb) info threads > >> Id Target Id Frame > >> 1 Thread 1 (hart 1) secondary_hart_loop () at > >> arch/riscv/cpu/start.S:436 2 Thread 2 (hart 2) > >> secondary_hart_loop () at arch/riscv/cpu/start.S:436 3 Thread 3 > >> (hart 3) secondary_hart_loop () at arch/riscv/cpu/start.S:436 4 > >> Thread 4 (hart 4) secondary_hart_loop () at > >> arch/riscv/cpu/start.S:436 > >> * 5 Thread 5 (hart 5) 0x0000000001200000 in ?? () > >> 6 Thread 6 (hart 6) 0x000000000000b650 in ?? () > >> 7 Thread 7 (hart 7) 0x000000000000b650 in ?? () > >> 8 Thread 8 (hart 8) 0x0000000000005fa0 in ?? () > >> (gdb) c > >> Continuing. > > > > Do they all "offline" harts remain in SPL/U-Boot > > secondary_hart_loop ? > >> > >> > >> > >> [ 0.175619] smp: Bringing up secondary CPUs ... > >> [ 1.230474] CPU1: failed to come online > >> [ 2.282349] CPU2: failed to come online > >> [ 3.334394] CPU3: failed to come online > >> [ 4.386783] CPU4: failed to come online > >> [ 4.427829] smp: Brought up 1 node, 4 CPUs > >> > >> > >> /root # cat /proc/cpuinfo > >> processor : 0 > >> hart : 4 > >> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 > >> mmu : sv39 > >> > >> processor : 5 > >> hart : 5 > >> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 > >> mmu : sv39 > >> > >> processor : 6 > >> hart : 6 > >> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 > >> mmu : sv39 > >> > >> processor : 7 > >> hart : 7 > >> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 > >> mmu : sv39 > >> > >> /root # > >> > >> Thanks, > >> Rick > > > ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: RISCV: the machanism of available_harts may cause other harts boot failure 2022-09-05 17:10 ` Nikita Shubin @ 2022-09-06 1:51 ` Rick Chen 0 siblings, 0 replies; 10+ messages in thread From: Rick Chen @ 2022-09-06 1:51 UTC (permalink / raw) To: Nikita Shubin Cc: Sean Anderson, Lukas Auer, U-Boot Mailing List, Heinrich Schuchardt, Atish Patra, Anup Patel, Bin Meng, Leo Liang, rick HI all, > On Mon, 5 Sep 2022 11:30:38 -0400 > Sean Anderson <seanga2@gmail.com> wrote: > > > On 9/5/22 3:47 AM, Nikita Shubin wrote: > > > Hi Rick! > > > > > > On Mon, 5 Sep 2022 14:22:41 +0800 > > > Rick Chen <rickchen36@gmail.com> wrote: > > > > > >> Hi, > > >> > > >> When I free-run a SMP system, I once hit a failure case where some > > >> harts didn't boot to the kernel shell successfully. > > >> However it can't be duplicated anymore even if I try many times. > > >> > > >> But when I set a break during debugging with GDB, it can trigger > > >> the failure case each time. > > > > > > If hart fails to register itself to available_harts before > > > send_ipi_many is hit by the main hart: > > > https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/lib/smp.c#L50 > > > > > > it won't exit the secondary_hart_loop: > > > https://elixir.bootlin.com/u-boot/v2022.10-rc3/source/arch/riscv/cpu/start.S#L433 > > > As no ipi will be sent to it. > > > > > > This might be exactly your case. > > > > When working on the IPI mechanism, I considered this possibility. > > However, there's really no way to know how long to wait. On normal > > systems, the boot hart is going to do a lot of work before calling > > send_ipi_many, and the other harts just have to make it through ~100 > > instructions. So I figured we would never run into this issue. > > > > We might not even need the mask... the only direct reason we might is > > for OpenSBI, as spl_invoke_opensbi is the only function which uses > > the wait parameter. 
> > Actually i think available_harts in is duplicated by device tree, > so we can: > > 1) drop registering harts in start.S (and related lock completely) > 2) fill gd->arch.available_harts in send_ipi_many relying on device > tree, and also making riscv_send_ipi non-fatal > 3) move this procedure to the very end just before spl_invoke_opensbi > 4) may be even wrap all above in some CONFIG option which enforces > checking that harts are alive, otherwise just pass the device tree harts > count

Thanks for all of your discussion and advice. I would like to make available_harts optional via something like CONFIG_SEND_IPI_BY_DTS_CPUS. That would help avoid this SMP boot-failure case and also would not affect other platforms.

#ifndef CONFIG_SEND_IPI_BY_DTS_CPUS
	/* skip if hart is not available */
	if (!(gd->arch.available_harts & (1 << reg)))
		continue;
#endif

Any opinions?

Thanks, Rick

> > > > > >> I think the mechanism of available_harts does not provide a method > > >> that guarantees the success of the SMP system. > > >> Maybe we shall think of a better way for the SMP booting or just > > >> remove it ? > > > > > > I haven't experienced any unexplained problem with hart_lottery or > > > available_harts_lock unless: > > > > > > 1) harts are started non-simultaneously > > > 2) SPL/U-Boot is in some kind of TCM, OCRAM, etc... which is not > > > cleared on reset which leaves available_harts dirty > > > > XIP, of course, has this problem every time and just doesn't use the > > mask. I remember thinking a lot about how to deal with this, but I > > never ended up sending a patch because I didn't have a XIP system. > > It can be in some part emulated by setting up SPL region as > read-only via PMP before start. > > > > > > --Sean > > > > > 3) something is wrong with atomics > > > > > > Also there might be something wrong with IPI send/recieve. > > > > > >> > > >> Thread 8 hit Breakpoint 1, harts_early_init () > > >> > > >> (gdb) c > > >> Continuing.
> > >> [Switching to Thread 7] > > >> > > >> Thread 7 hit Breakpoint 1, harts_early_init () > > >> > > >> (gdb) > > >> Continuing. > > >> [Switching to Thread 6] > > >> > > >> Thread 6 hit Breakpoint 1, harts_early_init () > > >> > > >> (gdb) > > >> Continuing. > > >> [Switching to Thread 5] > > >> > > >> Thread 5 hit Breakpoint 1, harts_early_init () > > >> > > >> (gdb) > > >> Continuing. > > >> [Switching to Thread 4] > > >> > > >> Thread 4 hit Breakpoint 1, harts_early_init () > > >> > > >> (gdb) > > >> Continuing. > > >> [Switching to Thread 3] > > >> > > >> Thread 3 hit Breakpoint 1, harts_early_init () > > >> (gdb) > > >> Continuing. > > >> [Switching to Thread 2] > > >> > > >> Thread 2 hit Breakpoint 1, harts_early_init () > > >> (gdb) > > >> Continuing. > > >> [Switching to Thread 1] > > >> > > >> Thread 1 hit Breakpoint 1, harts_early_init () > > >> (gdb) > > >> Continuing. > > >> [Switching to Thread 5] > > >> > > >> > > >> Thread 5 hit Breakpoint 3, 0x0000000001200000 in ?? () > > >> (gdb) info threads > > >> Id Target Id Frame > > >> 1 Thread 1 (hart 1) secondary_hart_loop () at > > >> arch/riscv/cpu/start.S:436 2 Thread 2 (hart 2) > > >> secondary_hart_loop () at arch/riscv/cpu/start.S:436 3 Thread 3 > > >> (hart 3) secondary_hart_loop () at arch/riscv/cpu/start.S:436 4 > > >> Thread 4 (hart 4) secondary_hart_loop () at > > >> arch/riscv/cpu/start.S:436 > > >> * 5 Thread 5 (hart 5) 0x0000000001200000 in ?? () > > >> 6 Thread 6 (hart 6) 0x000000000000b650 in ?? () > > >> 7 Thread 7 (hart 7) 0x000000000000b650 in ?? () > > >> 8 Thread 8 (hart 8) 0x0000000000005fa0 in ?? () > > >> (gdb) c > > >> Continuing. > > > > > > Do they all "offline" harts remain in SPL/U-Boot > > > secondary_hart_loop ? > > >> > > >> > > >> > > >> [ 0.175619] smp: Bringing up secondary CPUs ... 
> > >> [ 1.230474] CPU1: failed to come online > > >> [ 2.282349] CPU2: failed to come online > > >> [ 3.334394] CPU3: failed to come online > > >> [ 4.386783] CPU4: failed to come online > > >> [ 4.427829] smp: Brought up 1 node, 4 CPUs > > >> > > >> > > >> /root # cat /proc/cpuinfo > > >> processor : 0 > > >> hart : 4 > > >> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 > > >> mmu : sv39 > > >> > > >> processor : 5 > > >> hart : 5 > > >> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 > > >> mmu : sv39 > > >> > > >> processor : 6 > > >> hart : 6 > > >> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 > > >> mmu : sv39 > > >> > > >> processor : 7 > > >> hart : 7 > > >> isa : rv64i2p0m2p0a2p0c2p0xv5-1p1 > > >> mmu : sv39 > > >> > > >> /root # > > >> > > >> Thanks, > > >> Rick > > > > > > ^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads:[~2022-09-06 1:51 UTC | newest] Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2022-09-05 6:22 RISCV: the machanism of available_harts may cause other harts boot failure Rick Chen 2022-09-05 7:47 ` Nikita Shubin 2022-09-05 15:30 ` Sean Anderson 2022-09-05 15:41 ` Heinrich Schuchardt 2022-09-05 15:45 ` Sean Anderson 2022-09-05 16:00 ` Heinrich Schuchardt 2022-09-05 16:14 ` Sean Anderson 2022-09-05 16:30 ` Heinrich Schuchardt 2022-09-05 17:10 ` Nikita Shubin 2022-09-06 1:51 ` Rick Chen