* [v3 0/1] Introduce support for PSF control.
@ 2021-04-28 16:03 Ramakrishna Saripalli
  2021-04-28 16:03 ` [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control Ramakrishna Saripalli
  0 siblings, 1 reply; 9+ messages in thread
From: Ramakrishna Saripalli @ 2021-04-28 16:03 UTC (permalink / raw)
  To: linux-kernel, x86, tglx, mingo, bp, hpa; +Cc: bsd, rsaripal

From: Ramakrishna Saripalli <rk.saripalli@amd.com>

Predictive Store Forwarding:
AMD Zen3 processors feature a new technology called
Predictive Store Forwarding (PSF).

https://www.amd.com/system/files/documents/security-analysis-predictive-store-forwarding.pdf

PSF is a hardware-based micro-architectural optimization designed
to improve the performance of code execution by predicting address
dependencies between loads and stores.

How PSF works:

It is very common for a CPU to execute a load instruction to an address
that was recently written by a store. Modern CPUs implement a technique
known as Store-To-Load-Forwarding (STLF) to improve performance in such
cases. With STLF, data from the store is forwarded directly to the load
without having to wait for it to be written to memory. In a typical CPU,
STLF occurs after the address of both the load and store are calculated
and determined to match.

PSF expands on this by speculating on the relationship between loads and
stores without waiting for the address calculation to complete. With PSF,
the CPU learns over time the relationship between loads and stores.
If STLF typically occurs between a particular store and load, the CPU will
remember this.

In typical code, PSF provides a performance benefit by speculating on
the load result and allowing later instructions to begin execution
sooner than they otherwise would be able to.

Causes of Incorrect PSF:

Incorrect PSF predictions can occur for two reasons.

First, it is possible that the store/load pair had a dependency for a
while but later stopped having one.  This can occur if the address of
either the store or the load changes during the execution of the program.

The second source of incorrect PSF predictions is an alias in the PSF
predictor structure.  The PSF predictor tracks store-load pairs based on
portions of their RIP, so a store-load pair which does have a dependency
may alias in the predictor with another store-load pair which does not.

This can result in incorrect speculation when the second store/load pair
is executed.

Security Analysis:

Previous research has shown that CPU speculation down non-architectural
paths can open the potential for side-channel attacks.
In particular, programs that implement isolation, also known as
‘sandboxing’, entirely in software may need to be concerned with the
incorrect CPU speculation that bad PSF predictions can cause.

Because PSF speculation is limited to the current program context,
the impact of bad PSF speculation is very similar to that of
Speculative Store Bypass (Spectre v4).

Predictive Store Forwarding controls:
There are two hardware control bits which influence the PSF feature:
- MSR 48h bit 2 - Speculative Store Bypass Disable (SSBD)
- MSR 48h bit 7 - Predictive Store Forwarding Disable (PSFD)

The PSF feature is disabled if either of these bits is set.  These bits
are controllable on a per-thread basis in an SMT system. By default, both
SSBD and PSFD are 0 meaning that the speculation features are enabled.

While the SSBD bit disables PSF and speculative store bypass, PSFD only
disables PSF.

PSFD may be desirable for software which is concerned with the
speculative behavior of PSF but desires a smaller performance impact than
setting SSBD.

Support for PSFD is indicated in CPUID Fn8000_0008 EBX[28].
All processors that support PSF will also support PSFD.

Testing notes:
- Tested on Milan processor with the following kernel parameters
    - [spec_store_bypass_disable={off,on}] with
       [predict_store_fwd={off,on}]
      and verified SPEC_CTRL_MSR value (0x48) on all CPUs is the same
      and reflects the kernel parameters

      Modified kernel to not set STIBP unconditionally (Bit 1) and 
      verified the SPEC_CTRL_MSR with the same kernel parameters

- Tested on Rome systems (PSF feature is not supported)
    Verified SPEC_CTRL_MSR MSR value on all logical CPUs.
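The MSR checks in the testing notes above can be reproduced from user space along these lines. This is a hypothetical sketch: it assumes the msr-tools package (`rdmsr`) and the msr kernel module, needs root, and skips cleanly when either is missing:

```shell
#!/bin/sh
# Read SPEC_CTRL (MSR 0x48) on every logical CPU and report bit 7 (PSFD).
PSFD_MASK=$((1 << 7))   # 0x80

if ! command -v rdmsr >/dev/null 2>&1; then
    echo "rdmsr not available (install msr-tools); skipping"
    exit 0
fi

for msrdev in /dev/cpu/*/msr; do
    # Glob stays literal when the msr module is not loaded.
    [ -e "$msrdev" ] || { echo "msr module not loaded; skipping"; exit 0; }
    cpu=${msrdev#/dev/cpu/}; cpu=${cpu%/msr}
    val=$((0x$(rdmsr -p "$cpu" 0x48)))
    if [ $((val & PSFD_MASK)) -ne 0 ]; then
        echo "cpu$cpu: PSFD set (PSF disabled)"
    else
        echo "cpu$cpu: PSFD clear (PSF enabled)"
    fi
done
```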

Ramakrishna Saripalli (1):
  x86/cpufeatures: Implement Predictive Store Forwarding control.

 .../admin-guide/kernel-parameters.txt         |  5 +++++
 arch/x86/include/asm/cpufeatures.h            |  1 +
 arch/x86/include/asm/msr-index.h              |  2 ++
 arch/x86/kernel/cpu/amd.c                     | 20 +++++++++++++++++++
 4 files changed, 28 insertions(+)


base-commit: eb4fae8d3b9ed65099eea96665aa4a11e4862ac4
-- 
2.25.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-28 16:03 [v3 0/1] Introduce support for PSF control Ramakrishna Saripalli
@ 2021-04-28 16:03 ` Ramakrishna Saripalli
  2021-04-29  5:13   ` Tom Lendacky
  2021-04-29  5:38   ` Reiji Watanabe
  0 siblings, 2 replies; 9+ messages in thread
From: Ramakrishna Saripalli @ 2021-04-28 16:03 UTC (permalink / raw)
  To: linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet; +Cc: bsd, rsaripal

From: Ramakrishna Saripalli <rk.saripalli@amd.com>

Certain AMD processors feature a new technology called Predictive Store
Forwarding (PSF).

PSF is a micro-architectural optimization designed to improve the
performance of code execution by predicting dependencies between
loads and stores.

Incorrect PSF predictions can occur for two reasons.

- It is possible that the load/store pair had a dependency for a while
  but the dependency has stopped because the address in the load/store
  pair has changed.

- A second source of incorrect PSF predictions is an alias in the PSF
  predictor structure, which is stored in micro-architectural state.
  The PSF predictor tracks load/store pairs based on portions of the
  instruction pointer. It is possible that a load/store pair which does
  have a dependency may be aliased by another load/store pair which does
  not. This can result in incorrect speculation.

  Software may be able to detect this aliasing and perform side-channel
  attacks.

All CPUs that implement PSF provide one bit to disable this feature.
If the bit to disable this feature is available, it means that the CPU
implements PSF feature and is therefore vulnerable to PSF risks.

The bit that is introduced:

X86_FEATURE_PSFD: CPUID_Fn80000008_EBX[28] ("PSF disable")
	If this bit is 1, the CPU implements PSF and PSF control
	via the SPEC_CTRL MSR is supported.

All AMD processors that support PSF implement a bit in
SPEC_CTRL MSR (0x48) to disable or enable Predictive Store
Forwarding.

PSF control introduces a new kernel parameter called
	predict_store_fwd.

The kernel parameter predict_store_fwd takes the following values:

- off. Disables PSF on all CPUs.

- on. Enables PSF on all CPUs.
        This is also the default setting.
---
ChangeLogs:
    V2->V3:
          Set the X86_FEATURE_SPEC_CTRL_MSR cap in boot cpu caps.
          Fix kernel documentation for the kernel parameter.
          Rename PSF to a control instead of mitigation.

    V1->V2:
	- Smashed multiple commits into one commit.
	- Rename PSF to a control instead of mitigation.

    V1:
	- Initial patchset.
	- Kernel parameter controls enable and disable of PSF.
====================
Signed-off-by: Ramakrishna Saripalli <rk.saripalli@amd.com>
---
 .../admin-guide/kernel-parameters.txt         |  5 +++++
 arch/x86/include/asm/cpufeatures.h            |  1 +
 arch/x86/include/asm/msr-index.h              |  2 ++
 arch/x86/kernel/cpu/amd.c                     | 20 +++++++++++++++++++
 4 files changed, 28 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index de27d5a4d994..0576e8a8d033 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3950,6 +3950,11 @@
 			Format: {"off"}
 			Disable Hardware Transactional Memory
 
+	predict_store_fwd=	[X86] This option controls PSF.
+			off - Turns off PSF.
+			on  - Turns on PSF.
+			default : on.
+
 	preempt=	[KNL]
 			Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
 			none - Limited to cond_resched() calls
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 3c94316169a3..e36e6bf2f18b 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -313,6 +313,7 @@
 #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
 #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
+#define X86_FEATURE_PSFD		(13*32+28) /* Predictive Store Forward Disable */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 742d89a00721..21f0c3fc1b2c 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -51,6 +51,8 @@
 #define SPEC_CTRL_STIBP			BIT(SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
 #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
 #define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
+#define SPEC_CTRL_PSFD_SHIFT		7
+#define SPEC_CTRL_PSFD			BIT(SPEC_CTRL_PSFD_SHIFT)	/* Predictive Store Forwarding Disable */
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 2d11384dc9ab..c9b6ba3ea431 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1165,3 +1165,23 @@ void set_dr_addr_mask(unsigned long mask, int dr)
 		break;
 	}
 }
+
+static int __init psf_cmdline(char *str)
+{
+	if (!boot_cpu_has(X86_FEATURE_PSFD))
+		return 0;
+
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off")) {
+		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
+		x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
+		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+		setup_clear_cpu_cap(X86_FEATURE_PSFD);
+	}
+
+	return 0;
+}
+
+early_param("predict_store_fwd", psf_cmdline);
-- 
2.25.1



* Re: [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-28 16:03 ` [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control Ramakrishna Saripalli
@ 2021-04-29  5:13   ` Tom Lendacky
  2021-04-29 14:03     ` Saripalli, RK
  2021-04-29  5:38   ` Reiji Watanabe
  1 sibling, 1 reply; 9+ messages in thread
From: Tom Lendacky @ 2021-04-29  5:13 UTC (permalink / raw)
  To: Ramakrishna Saripalli, linux-kernel, x86, tglx, mingo, bp, hpa,
	Jonathan Corbet
  Cc: bsd

On 4/28/21 11:03 AM, Ramakrishna Saripalli wrote:
> From: Ramakrishna Saripalli <rk.saripalli@amd.com>
> 
> [...]
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 2d11384dc9ab..c9b6ba3ea431 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -1165,3 +1165,23 @@ void set_dr_addr_mask(unsigned long mask, int dr)
>  		break;
>  	}
>  }
> +
> +static int __init psf_cmdline(char *str)
> +{
> +	if (!boot_cpu_has(X86_FEATURE_PSFD))
> +		return 0;
> +
> +	if (!str)
> +		return -EINVAL;
> +
> +	if (!strcmp(str, "off")) {
> +		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
> +		x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
> +		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> +		setup_clear_cpu_cap(X86_FEATURE_PSFD);

Why are you clearing the feature here? Won't this be needed for
virtualization support?

Thanks,
Tom

> +	}
> +
> +	return 0;
> +}
> +
> +early_param("predict_store_fwd", psf_cmdline);
> 


* Re: [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-28 16:03 ` [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control Ramakrishna Saripalli
  2021-04-29  5:13   ` Tom Lendacky
@ 2021-04-29  5:38   ` Reiji Watanabe
  2021-04-29 14:01     ` Saripalli, RK
  1 sibling, 1 reply; 9+ messages in thread
From: Reiji Watanabe @ 2021-04-29  5:38 UTC (permalink / raw)
  To: Ramakrishna Saripalli
  Cc: linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet, bsd

> +       if (!strcmp(str, "off")) {
> +               set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
> +               x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
> +               wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);

My previous suggestion about updating MSR_IA32_SPEC_CTRL meant
something like:

    rdmsrl(MSR_IA32_SPEC_CTRL, current);
    wrmsrl(MSR_IA32_SPEC_CTRL, current | SPEC_CTRL_PSFD);

And this is to keep the behavior of code below check_bugs().
(Or do you intentionally change it due to some reason ?)
BTW, x86_spec_ctrl_base, which is updated in psf_cmdline(),
would be overwritten by check_bugs() anyway as follows.
---
void __init check_bugs(void)
{
        <...>
        /*
         * Read the SPEC_CTRL MSR to account for reserved bits which may
         * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
         * init code as it is not enumerated and depends on the family.
         */
        if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
                rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
        <...>
---

> +               setup_clear_cpu_cap(X86_FEATURE_PSFD);

Does X86_FEATURE_PSFD need to be cleared for the 'off' case ?
Do you want to remove "psfd" from /proc/cpuinfo
when PSFD is enabled ? (not when PSFD is disabled ?)


Thanks,
Reiji


* Re: [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-29  5:38   ` Reiji Watanabe
@ 2021-04-29 14:01     ` Saripalli, RK
  2021-04-29 14:25       ` Tom Lendacky
  0 siblings, 1 reply; 9+ messages in thread
From: Saripalli, RK @ 2021-04-29 14:01 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet, bsd



On 4/29/2021 12:38 AM, Reiji Watanabe wrote:
>> +       if (!strcmp(str, "off")) {
>> +               set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>> +               x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
>> +               wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> 
> My previous suggestion about updating MSR_IA32_SPEC_CTRL meant
> something like:
> 
>     rdmsrl(MSR_IA32_SPEC_CTRL, current);
>     wrmsrl(MSR_IA32_SPEC_CTRL, current | SPEC_CTRL_PSFD);
> 
> And this is to keep the behavior of code below check_bugs().
> (Or do you intentionally change it due to some reason ?)
> BTW, x86_spec_ctrl_base, which is updated in psf_cmdline(),
> would be overwritten by check_bugs() anyway as follows.
> ---

Since psf_cmdline() directly writes to the MSR itself (and only does so
if the feature is available per CPUID), check_bugs() should be ok.

My patch for now does not depend on the value of x86_spec_ctrl_base after psf_cmdline()
finishes execution.

> void __init check_bugs(void)
> {
>         <...>
>         /*
>          * Read the SPEC_CTRL MSR to account for reserved bits which may
>          * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
>          * init code as it is not enumerated and depends on the family.
>          */
>         if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>                 rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>         <...>
> ---
> 
>> +               setup_clear_cpu_cap(X86_FEATURE_PSFD);
> 
> Does X86_FEATURE_PSFD need to be cleared for the 'off' case ?
> Do you want to remove "psfd" from /proc/cpuinfo
> when PSFD is enabled ? (not when PSFD is disabled ?)
> 
> 
No, it should not be cleared, I agree.
But I did test with KVM (with my patch that is not here) and I do not see
issues (meaning user space guest in QEMU is seeing PSF CPUID guest capability)

I see no reason to clear this feature and I will submit a new patch with this and other changes.

> Thanks,
> Reiji
> 


* Re: [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-29  5:13   ` Tom Lendacky
@ 2021-04-29 14:03     ` Saripalli, RK
  0 siblings, 0 replies; 9+ messages in thread
From: Saripalli, RK @ 2021-04-29 14:03 UTC (permalink / raw)
  To: Tom Lendacky, linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet
  Cc: bsd



On 4/29/2021 12:13 AM, Tom Lendacky wrote:
> On 4/28/21 11:03 AM, Ramakrishna Saripalli wrote:
>> From: Ramakrishna Saripalli <rk.saripalli@amd.com>
>>
>> [...]
>> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
>> index 2d11384dc9ab..c9b6ba3ea431 100644
>> --- a/arch/x86/kernel/cpu/amd.c
>> +++ b/arch/x86/kernel/cpu/amd.c
>> @@ -1165,3 +1165,23 @@ void set_dr_addr_mask(unsigned long mask, int dr)
>>  		break;
>>  	}
>>  }
>> +
>> +static int __init psf_cmdline(char *str)
>> +{
>> +	if (!boot_cpu_has(X86_FEATURE_PSFD))
>> +		return 0;
>> +
>> +	if (!str)
>> +		return -EINVAL;
>> +
>> +	if (!strcmp(str, "off")) {
>> +		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>> +		x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
>> +		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>> +		setup_clear_cpu_cap(X86_FEATURE_PSFD);
> 
> Why are you clearing the feature here? Won't this be needed for
> virtualization support?

Yes this feature is needed for KVM/virtualization support.
This feature should not be cleared.

> 
> Thanks,
> Tom
> 
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +early_param("predict_store_fwd", psf_cmdline);
>>


* Re: [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-29 14:01     ` Saripalli, RK
@ 2021-04-29 14:25       ` Tom Lendacky
  2021-04-29 14:32         ` Saripalli, RK
  0 siblings, 1 reply; 9+ messages in thread
From: Tom Lendacky @ 2021-04-29 14:25 UTC (permalink / raw)
  To: Saripalli, RK, Reiji Watanabe
  Cc: linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet, bsd

On 4/29/21 9:01 AM, Saripalli, RK wrote:
> 
> 
> On 4/29/2021 12:38 AM, Reiji Watanabe wrote:
>>> +       if (!strcmp(str, "off")) {
>>> +               set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>>> +               x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
>>> +               wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>>
>> My previous suggestion about updating MSR_IA32_SPEC_CTRL meant
>> something like:
>>
>>     rdmsrl(MSR_IA32_SPEC_CTRL, current);
>>     wrmsrl(MSR_IA32_SPEC_CTRL, current | SPEC_CTRL_PSFD);
>>
>> And this is to keep the behavior of code below check_bugs().
>> (Or do you intentionally change it due to some reason ?)
>> BTW, x86_spec_ctrl_base, which is updated in psf_cmdline(),
>> would be overwritten by check_bugs() anyway as follows.
>> ---
> 
> Since psf_cmdline() directly writes to the MSR itself (and it only does this)
> if the feature is available (per CPUID), check_bugs() should be ok.
> 
> My patch for now does not depend on the value of x86_spec_ctrl_base after psf_cmdline()
> finishes execution.

Reiji is correct. What if BIOS has set some other bits in SPEC_CTRL (now
or in the future) as part of setup? You will effectively be zeroing them
out. The correct method is as he has documented: read the MSR, OR in the
PSFD bit and write the MSR back.

Thanks,
Tom



* Re: [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-29 14:25       ` Tom Lendacky
@ 2021-04-29 14:32         ` Saripalli, RK
  2021-04-29 15:02           ` Borislav Petkov
  0 siblings, 1 reply; 9+ messages in thread
From: Saripalli, RK @ 2021-04-29 14:32 UTC (permalink / raw)
  To: Tom Lendacky, Reiji Watanabe
  Cc: linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet, bsd



On 4/29/2021 9:25 AM, Tom Lendacky wrote:
> On 4/29/21 9:01 AM, Saripalli, RK wrote:
>>
>>
>> On 4/29/2021 12:38 AM, Reiji Watanabe wrote:
>>>> +       if (!strcmp(str, "off")) {
>>>> +               set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>>>> +               x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
>>>> +               wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>>>
>>> My previous suggestion about updating MSR_IA32_SPEC_CTRL meant
>>> something like:
>>>
>>>     rdmsrl(MSR_IA32_SPEC_CTRL, current);
>>>     wrmsrl(MSR_IA32_SPEC_CTRL, current | SPEC_CTRL_PSFD);
>>>
>>> And this is to keep the behavior of code below check_bugs().
>>> (Or do you intentionally change it due to some reason ?)
>>> BTW, x86_spec_ctrl_base, which is updated in psf_cmdline(),
>>> would be overwritten by check_bugs() anyway as follows.
>>> ---
>>
>> Since psf_cmdline() directly writes to the MSR itself (and it only does this)
>> if the feature is available (per CPUID), check_bugs() should be ok.
>>
>> My patch for now does not depend on the value of x86_spec_ctrl_base after psf_cmdline()
>> finishes execution.
> 
> Reiji is correct. What if BIOS has set some other bits in SPEC_CTRL (now
> or in the future) as part of setup? You would effectively be zeroing them
> out. The correct method is as he has documented: reading the MSR,
> or'ing in the PSFD bit, and writing the MSR back.

Yes, I agree with his analysis and will fix it.
> 
> Thanks,
> Tom
> 
>>
>>> void __init check_bugs(void)
>>> {
>>>         <...>
>>>         /*
>>>          * Read the SPEC_CTRL MSR to account for reserved bits which may
>>>          * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
>>>          * init code as it is not enumerated and depends on the family.
>>>          */
>>>         if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>>>                 rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>>>         <...>
>>> ---
>>>
>>>> +               setup_clear_cpu_cap(X86_FEATURE_PSFD);
>>>
>>> Does X86_FEATURE_PSFD need to be cleared for the 'off' case?
>>> Do you want to remove "psfd" from /proc/cpuinfo
>>> when PSFD is enabled? (not when PSFD is disabled?)
>>>
>>>
>> No, I agree, it should not be cleared.
>> But I did test with KVM (with a patch of mine that is not posted here) and I see no
>> issues (meaning the user-space guest in QEMU sees the PSF CPUID guest capability).
>>
>> I see no reason to clear this feature and I will submit a new patch with this and other changes.
>>
>>> Thanks,
>>> Reiji
>>>


* Re: [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-29 14:32         ` Saripalli, RK
@ 2021-04-29 15:02           ` Borislav Petkov
  0 siblings, 0 replies; 9+ messages in thread
From: Borislav Petkov @ 2021-04-29 15:02 UTC (permalink / raw)
  To: Saripalli, RK
  Cc: Tom Lendacky, Reiji Watanabe, linux-kernel, x86, tglx, mingo,
	hpa, Jonathan Corbet, bsd

On Thu, Apr 29, 2021 at 09:32:35AM -0500, Saripalli, RK wrote:
> Yes, I agree with his analysis and will fix it.

So you can do this and correct the comment above it to explain why
you're doing the "tmp" thing.

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d41b70fe4918..536136e0daa3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -78,6 +78,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
 
 void __init check_bugs(void)
 {
+	u64 tmp = 0;
+
 	identify_boot_cpu();
 
 	/*
@@ -97,7 +99,9 @@ void __init check_bugs(void)
 	 * init code as it is not enumerated and depends on the family.
 	 */
 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
-		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
+
+	x86_spec_ctrl_base |= tmp;
 
 	/* Allow STIBP in MSR_SPEC_CTRL if supported */
 	if (boot_cpu_has(X86_FEATURE_STIBP))

---

and as Tom correctly suggests, set X86_FEATURE_MSR_SPEC_CTRL in
psf_cmdline() so that the above loading of the base value works.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


end of thread, other threads:[~2021-04-29 15:02 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-28 16:03 [v3 0/1] Introduce support for PSF control Ramakrishna Saripalli
2021-04-28 16:03 ` [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control Ramakrishna Saripalli
2021-04-29  5:13   ` Tom Lendacky
2021-04-29 14:03     ` Saripalli, RK
2021-04-29  5:38   ` Reiji Watanabe
2021-04-29 14:01     ` Saripalli, RK
2021-04-29 14:25       ` Tom Lendacky
2021-04-29 14:32         ` Saripalli, RK
2021-04-29 15:02           ` Borislav Petkov
