* [PATCH] arm64: ssbs: Fix context-switch when SSBS instructions are present
@ 2020-02-06 11:34 ` Will Deacon
  0 siblings, 0 replies; 8+ messages in thread
From: Will Deacon @ 2020-02-06 11:34 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, kernel-team, Will Deacon, stable, Marc Zyngier,
	Catalin Marinas, Srinivas Ramana

When all CPUs in the system implement the SSBS instructions, we
advertise this via an HWCAP and allow EL0 to toggle the SSBS field
in PSTATE directly. Consequently, the state of the mitigation is not
accurately tracked by the TIF_SSBD thread flag and the PSTATE value
is authoritative.

Avoid forcing the SSBS field in context-switch on such a system, and
simply rely on the PSTATE register instead.
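
For illustration only (not from the patch itself), a minimal userspace
sketch of the behaviour described above, assuming HWCAP_SSBS from
<asm/hwcap.h> and an assembler that understands the named SSBS pstate
field:

#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>		/* HWCAP_SSBS */

int main(void)
{
	if (!(getauxval(AT_HWCAP) & HWCAP_SSBS)) {
		puts("SSBS instructions not advertised");
		return 1;
	}

	/*
	 * EL0 flips the mitigation for this thread directly:
	 * SSBS == 0 keeps speculative store bypass disabled (mitigated),
	 * SSBS == 1 allows it again.
	 */
	asm volatile("msr SSBS, #0");	/* may need e.g. -march=armv8.2-a+ssbs */
	return 0;
}

Since such a write never updates TIF_SSBD, only the saved PSTATE value can
describe the thread's real state, which is what the hunk below relies on.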

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Srinivas Ramana <sramana@codeaurora.org>
Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/process.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index d54586d5b031..45e867f40a7a 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
 	if (unlikely(next->flags & PF_KTHREAD))
 		return;
 
+	/*
+	 * If all CPUs implement the SSBS instructions, then we just
+	 * need to context-switch the PSTATE field.
+	 */
+	if (cpu_have_feature(cpu_feature(SSBS)))
+		return;
+
 	/* If the mitigation is enabled, then we leave SSBS clear. */
 	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
 	    test_tsk_thread_flag(next, TIF_SSBD))
-- 
2.25.0.341.g760bfbb309-goog



* Re: [PATCH] arm64: ssbs: Fix context-switch when SSBS instructions are present
  2020-02-06 11:34 ` Will Deacon
@ 2020-02-06 11:49   ` Marc Zyngier
  -1 siblings, 0 replies; 8+ messages in thread
From: Marc Zyngier @ 2020-02-06 11:49 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, mark.rutland, kernel-team, stable,
	Catalin Marinas, Srinivas Ramana

On 2020-02-06 11:34, Will Deacon wrote:
> When all CPUs in the system implement the SSBS instructions, we
> advertise this via an HWCAP and allow EL0 to toggle the SSBS field
> in PSTATE directly. Consequently, the state of the mitigation is not
> accurately tracked by the TIF_SSBD thread flag and the PSTATE value
> is authoritative.
> 
> Avoid forcing the SSBS field in context-switch on such a system, and
> simply rely on the PSTATE register instead.
> 
> Cc: <stable@vger.kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Srinivas Ramana <sramana@codeaurora.org>
> Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kernel/process.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index d54586d5b031..45e867f40a7a 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
>  	if (unlikely(next->flags & PF_KTHREAD))
>  		return;
> 
> +	/*
> +	 * If all CPUs implement the SSBS instructions, then we just
> +	 * need to context-switch the PSTATE field.
> +	 */
> +	if (cpu_have_feature(cpu_feature(SSBS)))
> +		return;
> +
>  	/* If the mitigation is enabled, then we leave SSBS clear. */
>  	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
>  	    test_tsk_thread_flag(next, TIF_SSBD))

Looks goot to me.

Reviewed-by: Marc Zyngier <maz@kernel.org>

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH] arm64: ssbs: Fix context-switch when SSBS instructions are present
  2020-02-06 11:49   ` Marc Zyngier
@ 2020-02-06 12:20     ` Will Deacon
  -1 siblings, 0 replies; 8+ messages in thread
From: Will Deacon @ 2020-02-06 12:20 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, mark.rutland, kernel-team, stable,
	Catalin Marinas, Srinivas Ramana

On Thu, Feb 06, 2020 at 11:49:31AM +0000, Marc Zyngier wrote:
> On 2020-02-06 11:34, Will Deacon wrote:
> > When all CPUs in the system implement the SSBS instructions, we
> > advertise this via an HWCAP and allow EL0 to toggle the SSBS field
> > in PSTATE directly. Consequently, the state of the mitigation is not
> > accurately tracked by the TIF_SSBD thread flag and the PSTATE value
> > is authoritative.
> > 
> > Avoid forcing the SSBS field in context-switch on such a system, and
> > simply rely on the PSTATE register instead.
> > 
> > Cc: <stable@vger.kernel.org>
> > Cc: Marc Zyngier <maz@kernel.org>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Srinivas Ramana <sramana@codeaurora.org>
> > Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/kernel/process.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> > 
> > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> > index d54586d5b031..45e867f40a7a 100644
> > --- a/arch/arm64/kernel/process.c
> > +++ b/arch/arm64/kernel/process.c
> > @@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
> >  	if (unlikely(next->flags & PF_KTHREAD))
> >  		return;
> > 
> > +	/*
> > +	 * If all CPUs implement the SSBS instructions, then we just
> > +	 * need to context-switch the PSTATE field.
> > +	 */
> > +	if (cpu_have_feature(cpu_feature(SSBS)))
> > +		return;
> > +
> >  	/* If the mitigation is enabled, then we leave SSBS clear. */
> >  	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
> >  	    test_tsk_thread_flag(next, TIF_SSBD))
> 
> Looks goot to me.

Ja!

> Reviewed-by: Marc Zyngier <maz@kernel.org>

Cheers. It occurs to me that, although the patch is correct, the comment and
the commit message need tweaking because we're actually predicating this on
the presence of SSBS in any form, so the instructions may not be
implemented. That's fine because the prctl() updates pstate, so it remains
authoritative and can't be lost by one of the CPUs treating it as RAZ/WI.
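
A minimal sketch of that prctl() path, for illustration (the helper name is
made up; the constants are the generic speculation controls from
<linux/prctl.h>):

#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>	/* PR_SET_SPECULATION_CTRL et al. */

/*
 * Ask the kernel to enable the store-bypass mitigation for this task.
 * On an SSBS system this updates the saved PSTATE value, so the request
 * cannot be lost on CPUs that treat the SSBS instructions as RAZ/WI.
 */
static int enable_ssb_mitigation(void)
{
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
		  PR_SPEC_DISABLE, 0, 0)) {
		perror("PR_SET_SPECULATION_CTRL");
		return -1;
	}
	return 0;
}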

I'll spin a v2 later on.

Will


* Re: [PATCH] arm64: ssbs: Fix context-switch when SSBS instructions are present
  2020-02-06 12:20     ` Will Deacon
@ 2020-02-06 12:41       ` Marc Zyngier
  -1 siblings, 0 replies; 8+ messages in thread
From: Marc Zyngier @ 2020-02-06 12:41 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, mark.rutland, kernel-team, stable,
	Catalin Marinas, Srinivas Ramana

On 2020-02-06 12:20, Will Deacon wrote:
> On Thu, Feb 06, 2020 at 11:49:31AM +0000, Marc Zyngier wrote:
>> On 2020-02-06 11:34, Will Deacon wrote:
>> > When all CPUs in the system implement the SSBS instructions, we
>> > advertise this via an HWCAP and allow EL0 to toggle the SSBS field
>> > in PSTATE directly. Consequently, the state of the mitigation is not
>> > accurately tracked by the TIF_SSBD thread flag and the PSTATE value
>> > is authoritative.
>> >
>> > Avoid forcing the SSBS field in context-switch on such a system, and
>> > simply rely on the PSTATE register instead.
>> >
>> > Cc: <stable@vger.kernel.org>
>> > Cc: Marc Zyngier <maz@kernel.org>
>> > Cc: Catalin Marinas <catalin.marinas@arm.com>
>> > Cc: Srinivas Ramana <sramana@codeaurora.org>
>> > Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
>> > Signed-off-by: Will Deacon <will@kernel.org>
>> > ---
>> >  arch/arm64/kernel/process.c | 7 +++++++
>> >  1 file changed, 7 insertions(+)
>> >
>> > diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
>> > index d54586d5b031..45e867f40a7a 100644
>> > --- a/arch/arm64/kernel/process.c
>> > +++ b/arch/arm64/kernel/process.c
>> > @@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
>> >  	if (unlikely(next->flags & PF_KTHREAD))
>> >  		return;
>> >
>> > +	/*
>> > +	 * If all CPUs implement the SSBS instructions, then we just
>> > +	 * need to context-switch the PSTATE field.
>> > +	 */
>> > +	if (cpu_have_feature(cpu_feature(SSBS)))
>> > +		return;
>> > +
>> >  	/* If the mitigation is enabled, then we leave SSBS clear. */
>> >  	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
>> >  	    test_tsk_thread_flag(next, TIF_SSBD))
>> 
>> Looks goot to me.
> 
> Ja!

Ach...

> 
>> Reviewed-by: Marc Zyngier <maz@kernel.org>
> 
> Cheers. It occurs to me that, although the patch is correct, the comment
> and the commit message need tweaking because we're actually predicating
> this on the presence of SSBS in any form, so the instructions may not be
> implemented. That's fine because the prctl() updates pstate, so it remains
> authoritative and can't be lost by one of the CPUs treating it as RAZ/WI.

True. It is the PSTATE bit that actually matters, not the presence of the
control instruction.

> I'll spin a v2 later on.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


