* [PATCH] arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled
@ 2019-10-10 16:34 James Morse
  2019-10-14 23:53 ` Will Deacon
  0 siblings, 1 reply; 2+ messages in thread
From: James Morse @ 2019-10-10 16:34 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Julien Thierry, Catalin Marinas, James Morse,
	Will Deacon, Julien Thierry

From: Julien Thierry <julien.thierry@arm.com>

Preempting from IRQ-return means that the task has its PSTATE saved
on the stack, which will get restored when the task is resumed and does
the actual IRQ return.

However, enabling some CPU features requires modifying the PSTATE. This
means that, if a task was scheduled out during an IRQ-return before all
CPU features are enabled, the task might restore a PSTATE that does not
include the feature enablement changes once scheduled back in.

* Task 1:

PAN == 0 ---|                          |---------------
            |                          |<- return from IRQ, PSTATE.PAN = 0
            | <- IRQ                   |
            +--------+ <- preempt()  +--
                                     ^
                                     |
                                     reschedule Task 1, PSTATE.PAN == 1
* Init:
        --------------------+------------------------
                            ^
                            |
                            enable_cpu_features
                            set PSTATE.PAN on all CPUs

Worse, since PSTATE is left untouched when a task switch is done, a task
missing the new PSTATE bits might affect another task, if both call
schedule() directly (outside of IRQ/exception contexts).


Fix this by preventing preemption on IRQ-return until features are
enabled on all CPUs.

This way the only PSTATE values saved on the stack are those from
synchronous exceptions. These are expected to be fatal this early; the
exception is BRK for WARN_ON(), but since that goes through
do_debug_exception(), which keeps IRQs masked, it shouldn't call schedule().

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[Replaced a really cool hack with a simpler branch->nop transition,
 expanded the commit message with Julien's cover-letter ascii art]
Signed-off-by: James Morse <james.morse@arm.com>
---
I think we don't see this happen because cpufeature enable calls happen
early, when there is not a lot going on. I couldn't hit it when trying.
I believe Julien did, by adding sleep statements(?) to kthread().

If we want to send this to stable, the first feature that depended on this
was PAN:
Fixes: 7209c868600b ("arm64: mm: Set PSTATE.PAN from the cpu_enable_pan() call")


 arch/arm64/kernel/cpufeature.c | 8 ++++++++
 arch/arm64/kernel/entry.S      | 7 +++++++
 2 files changed, 15 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9323bcc40a58..3b8c3f1c6d3f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2071,6 +2071,14 @@ void __init setup_cpu_features(void)
 			ARCH_DMA_MINALIGN);
 }
 
+void __init arm64_cpufeature_cb_nop(struct alt_instr *alt, __le32 *origptr,
+				    __le32 *updptr, int nr_inst)
+{
+	BUG_ON(nr_inst != 1);
+
+	*updptr = cpu_to_le32(aarch64_insn_gen_nop());
+}
+
 static bool __maybe_unused
 cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
 {
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 84a822748c84..7ab139c44ca5 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -670,6 +670,13 @@ el1_irq:
 	irq_handler
 
 #ifdef CONFIG_PREEMPT
+alternative_cb arm64_cpufeature_cb_nop
+	/*
+	 * Prevent preempt from IRQ until cpufeatures are enabled. This branch
+	 * is replaced by a nop by the callback.
+	 */
+	b	1f
+alternative_cb_end
 	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
 alternative_if ARM64_HAS_IRQ_PRIO_MASKING
 	/*
-- 
2.20.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* Re: [PATCH] arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled
  2019-10-10 16:34 [PATCH] arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled James Morse
@ 2019-10-14 23:53 ` Will Deacon
  0 siblings, 0 replies; 2+ messages in thread
From: Will Deacon @ 2019-10-14 23:53 UTC (permalink / raw)
  To: James Morse
  Cc: Mark Rutland, Catalin Marinas, Julien Thierry, linux-arm-kernel,
	Julien Thierry

On Thu, Oct 10, 2019 at 05:34:47PM +0100, James Morse wrote:
> From: Julien Thierry <julien.thierry@arm.com>
> 
[...]

> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 84a822748c84..7ab139c44ca5 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -670,6 +670,13 @@ el1_irq:
>  	irq_handler
>  
>  #ifdef CONFIG_PREEMPT
> +alternative_cb arm64_cpufeature_cb_nop
> +	/*
> +	 * Prevent preempt from IRQ until cpufeatures are enabled. This branch
> +	 * is replaced by a nop by the callback.
> +	 */
> +	b	1f
> +alternative_cb_end

Could we simplify this by intercepting the branch to preempt_schedule_irq()
with a C function that looks at arm64_const_caps_ready?

Will