linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 0/1] Introduce support for PSF control.
@ 2021-04-30 13:17 Ramakrishna Saripalli
  2021-04-30 13:17 ` [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control Ramakrishna Saripalli
  0 siblings, 1 reply; 16+ messages in thread
From: Ramakrishna Saripalli @ 2021-04-30 13:17 UTC (permalink / raw)
  To: linux-kernel, x86, tglx, mingo, bp, hpa; +Cc: bsd, rsaripal

From: Ramakrishna Saripalli <rk.saripalli@amd.com>

Predictive Store Forwarding:
AMD Zen3 processors feature a new technology called
Predictive Store Forwarding (PSF).

https://www.amd.com/system/files/documents/security-analysis-predictive-store-forwarding.pdf

PSF is a hardware-based micro-architectural optimization designed
to improve the performance of code execution by predicting address
dependencies between loads and stores.

How PSF works:

It is very common for a CPU to execute a load instruction to an address
that was recently written by a store. Modern CPUs implement a technique
known as Store-To-Load-Forwarding (STLF) to improve performance in such
cases. With STLF, data from the store is forwarded directly to the load
without having to wait for it to be written to memory. In a typical CPU,
STLF occurs after the address of both the load and store are calculated
and determined to match.

PSF expands on this by speculating on the relationship between loads and
stores without waiting for the address calculation to complete. With PSF,
the CPU learns over time the relationship between loads and stores.
If STLF typically occurs between a particular store and load, the CPU will
remember this.

In typical code, PSF provides a performance benefit by speculating on
the load result and allowing later instructions to begin execution
sooner than they otherwise would be able to.
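
As a purely illustrative example (not taken from the AMD whitepaper),
the kind of code pattern where both STLF and PSF can apply looks like
this, with consume() standing in for any later use of the loaded value:

	extern void consume(int v);	/* hypothetical consumer */

	void update(int *table, int idx, int val)
	{
		int x;

		table[idx] = val;	/* store */
		x = table[idx];		/* dependent load: STLF can forward
					 * 'val' without waiting for the
					 * store to complete */
		consume(x);		/* with PSF, this and later
					 * instructions can begin sooner,
					 * based on the predicted
					 * store->load dependency */
	}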

Causes of Incorrect PSF:

Incorrect PSF predictions can occur for two reasons.

First, a store/load pair may have had a dependency for a while which
later goes away. This can occur if the address of either the store or
the load changes during the execution of the program.

The second source of incorrect PSF predictions is an alias in the PSF
predictor structure.  The PSF predictor tracks store/load pairs based
on portions of their RIP. It is possible that a store/load pair which
does have a dependency may alias in the predictor with another
store/load pair which does not.

This can result in incorrect speculation when the second store/load pair
is executed.

Security Analysis:

Previous research has shown that when CPUs speculate down
non-architectural paths, side-channel attacks become possible.
In particular, programs that implement isolation, also known as
‘sandboxing’, entirely in software may need to be concerned with
incorrect CPU speculation that can occur due to bad PSF predictions.

Because PSF speculation is limited to the current program context,
the impact of bad PSF speculation is very similar to that of
Speculative Store Bypass (Spectre v4).

Predictive Store Forwarding controls:
There are two hardware control bits which influence the PSF feature:
- MSR 48h bit 2 – Speculative Store Bypass (SSBD)
- MSR 48h bit 7 – Predictive Store Forwarding Disable (PSFD)

The PSF feature is disabled if either of these bits is set.  These bits
are controllable on a per-thread basis in an SMT system. By default, both
SSBD and PSFD are 0, meaning that these speculation features are enabled.

While the SSBD bit disables PSF and speculative store bypass, PSFD only
disables PSF.

PSFD may be desirable for software which is concerned with the
speculative behavior of PSF but desires a smaller performance impact than
setting SSBD.

Support for PSFD is indicated in CPUID Fn8000_0008 EBX[28].
All processors that support PSF will also support PSFD.
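
For reference only, a rough user-space sketch (assuming GCC/Clang and
<cpuid.h>) that checks this CPUID bit could look like:

	#include <cpuid.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* CPUID Fn8000_0008: EBX bit 28 enumerates PSFD support */
		if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx) &&
		    (ebx & (1U << 28)))
			printf("PSFD supported\n");
		else
			printf("PSFD not enumerated\n");

		return 0;
	}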

Testing notes:
- Tested on a Milan processor with the following kernel parameters:
    - spec_store_bypass_disable={off,on} combined with
      predict_store_fwd={off,on}
      and verified that the SPEC_CTRL MSR value (0x48) is the same on
      all CPUs and reflects the kernel parameters.

      Modified the kernel to not set STIBP unconditionally (bit 1) and
      verified the SPEC_CTRL MSR with the same kernel parameters.

      Disabled all features that write to MSR_IA32_SPEC_CTRL and toggled
      predict_store_fwd. Verified MSR_IA32_SPEC_CTRL and x86_spec_ctrl_base
      are set properly.

- Tested on Rome systems (the PSF feature is not supported there).
    Verified the SPEC_CTRL MSR value on all logical CPUs.

ChangeLogs:
    V3->V4:
          Write to MSR_IA32_SPEC_CTRL properly:
             Read the MSR, modify the PSFD bit based on the kernel
             parameter and write the value back to the MSR.

             Changes made in psf_cmdline() and check_bugs().
    V2->V3:
          Set the X86_FEATURE_MSR_SPEC_CTRL cap in boot cpu caps.
          Fix kernel documentation for the kernel parameter.
          Rename PSF to a control instead of a mitigation.

    V1->V2:
        - Squashed multiple commits into one commit.
        - Renamed PSF to a control instead of a mitigation.

    V1:
        - Initial patchset.
        - Kernel parameter controls enable and disable of PSF.

Ramakrishna Saripalli (1):
  x86/cpufeatures: Implement Predictive Store Forwarding control.

 .../admin-guide/kernel-parameters.txt         |  5 ++++
 arch/x86/include/asm/cpufeatures.h            |  1 +
 arch/x86/include/asm/msr-index.h              |  2 ++
 arch/x86/kernel/cpu/amd.c                     | 23 +++++++++++++++++++
 arch/x86/kernel/cpu/bugs.c                    |  6 ++++-
 5 files changed, 36 insertions(+), 1 deletion(-)


base-commit: 0e16f466004d7f04296b9676a712a32a12367d1f
-- 
2.25.1



* [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 13:17 [PATCH v4 0/1] Introduce support for PSF control Ramakrishna Saripalli
@ 2021-04-30 13:17 ` Ramakrishna Saripalli
  2021-04-30 14:50   ` Tom Lendacky
                     ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Ramakrishna Saripalli @ 2021-04-30 13:17 UTC (permalink / raw)
  To: linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet; +Cc: bsd, rsaripal

From: Ramakrishna Saripalli <rk.saripalli@amd.com>

Certain AMD processors feature a new technology called Predictive Store
Forwarding (PSF).

PSF is a micro-architectural optimization designed to improve the
performance of code execution by predicting dependencies between
loads and stores.

Incorrect PSF predictions can occur for two reasons.

- It is possible that the load/store pair had a dependency for a while
  but the dependency has stopped because the address in the load/store
  pair has changed.

- The second source of incorrect PSF predictions is an alias in the PSF
  predictor structure stored in the micro-architectural state. The PSF
  predictor tracks load/store pairs based on portions of the instruction
  pointer. It is possible that a load/store pair which does have a
  dependency may be aliased by another load/store pair which does not
  have the same dependency. This can result in incorrect speculation.

  Software may be able to detect this aliasing and perform side-channel
  attacks.

All CPUs that implement PSF provide one bit to disable this feature.
If the bit to disable this feature is available, it means that the CPU
implements PSF feature and is therefore vulnerable to PSF risks.

The following feature bit is introduced:

X86_FEATURE_PSFD: CPUID_Fn80000008_EBX[28] ("PSF disable")
	If this bit is 1, CPU implements PSF and PSF control
	via SPEC_CTRL_MSR is supported in the CPU.

All AMD processors that support PSF implement a bit in
SPEC_CTRL MSR (0x48) to disable or enable Predictive Store
Forwarding.

PSF control introduces a new kernel parameter called
	predict_store_fwd.

Kernel parameter predict_store_fwd has the following values

- off. This value is used to disable PSF on all CPUs.

- on. This value is used to enable PSF on all CPUs.
        This is also the default setting.
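
For example, PSF can be disabled at boot by appending the following to
the kernel command line:

	predict_store_fwd=off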

Signed-off-by: Ramakrishna Saripalli<rk.saripalli@amd.com>
---
 .../admin-guide/kernel-parameters.txt         |  5 ++++
 arch/x86/include/asm/cpufeatures.h            |  1 +
 arch/x86/include/asm/msr-index.h              |  2 ++
 arch/x86/kernel/cpu/amd.c                     | 23 +++++++++++++++++++
 arch/x86/kernel/cpu/bugs.c                    |  6 ++++-
 5 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 04545725f187..a4dd08bb0d3a 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3940,6 +3940,11 @@
 			Format: {"off"}
 			Disable Hardware Transactional Memory
 
+	predict_store_fwd=	[X86] This option controls PSF.
+			off - Turns off PSF.
+			on  - Turns on PSF.
+			default : on.
+
 	preempt=	[KNL]
 			Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
 			none - Limited to cond_resched() calls
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index cc96e26d69f7..078f46022293 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -309,6 +309,7 @@
 #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
 #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
+#define X86_FEATURE_PSFD		(13*32+28) /* Predictive Store Forward Disable */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 546d6ecf0a35..f569918c8754 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -51,6 +51,8 @@
 #define SPEC_CTRL_STIBP			BIT(SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
 #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
 #define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
+#define SPEC_CTRL_PSFD_SHIFT		7
+#define SPEC_CTRL_PSFD			BIT(SPEC_CTRL_PSFD_SHIFT)	/* Predictive Store Forwarding Disable */
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 347a956f71ca..3fdaec8090b6 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1170,3 +1170,26 @@ void set_dr_addr_mask(unsigned long mask, int dr)
 		break;
 	}
 }
+
+static int __init psf_cmdline(char *str)
+{
+	u64 tmp = 0;
+
+	if (!boot_cpu_has(X86_FEATURE_PSFD))
+		return 0;
+
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off")) {
+		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
+		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
+		tmp |= SPEC_CTRL_PSFD;
+		x86_spec_ctrl_base |= tmp;
+		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+	}
+
+	return 0;
+}
+
+early_param("predict_store_fwd", psf_cmdline);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d41b70fe4918..536136e0daa3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -78,6 +78,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
 
 void __init check_bugs(void)
 {
+	u64 tmp = 0;
+
 	identify_boot_cpu();
 
 	/*
@@ -97,7 +99,9 @@ void __init check_bugs(void)
 	 * init code as it is not enumerated and depends on the family.
 	 */
 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
-		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
+
+	x86_spec_ctrl_base |= tmp;
 
 	/* Allow STIBP in MSR_SPEC_CTRL if supported */
 	if (boot_cpu_has(X86_FEATURE_STIBP))
-- 
2.25.1



* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 13:17 ` [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control Ramakrishna Saripalli
@ 2021-04-30 14:50   ` Tom Lendacky
  2021-04-30 14:56     ` Saripalli, RK
  2021-04-30 19:42   ` Reiji Watanabe
  2021-05-05  0:11   ` Pawan Gupta
  2 siblings, 1 reply; 16+ messages in thread
From: Tom Lendacky @ 2021-04-30 14:50 UTC (permalink / raw)
  To: Ramakrishna Saripalli, linux-kernel, x86, tglx, mingo, bp, hpa,
	Jonathan Corbet
  Cc: bsd

On 4/30/21 8:17 AM, Ramakrishna Saripalli wrote:
> From: Ramakrishna Saripalli <rk.saripalli@amd.com>
> 
> Certain AMD processors feature a new technology called Predictive Store
> Forwarding (PSF).
> 
> PSF is a micro-architectural optimization designed to improve the
> performance of code execution by predicting dependencies between
> loads and stores.
> 
> Incorrect PSF predictions can occur due to two reasons.
> 
> - It is possible that the load/store pair may have had dependency for
>   a while but the dependency has stopped because the address in the
>   load/store pair has changed.
> 
> - Second source of incorrect PSF prediction can occur because of an alias
>   in the PSF predictor structure stored in the microarchitectural state.
>   PSF predictor tracks load/store pair based on portions of instruction
>   pointer. It is possible that a load/store pair which does have a
>   dependency may be aliased by another load/store pair which does not have
>   the same dependency. This can result in incorrect speculation.
> 
>   Software may be able to detect this aliasing and perform side-channel
>   attacks.
> 
> All CPUs that implement PSF provide one bit to disable this feature.
> If the bit to disable this feature is available, it means that the CPU
> implements PSF feature and is therefore vulnerable to PSF risks.
> 
> The bits that are introduced
> 
> X86_FEATURE_PSFD: CPUID_Fn80000008_EBX[28] ("PSF disable")
> 	If this bit is 1, CPU implements PSF and PSF control
> 	via SPEC_CTRL_MSR is supported in the CPU.
> 
> All AMD processors that support PSF implement a bit in
> SPEC_CTRL MSR (0x48) to disable or enable Predictive Store
> Forwarding.
> 
> PSF control introduces a new kernel parameter called
> 	predict_store_fwd.
> 
> Kernel parameter predict_store_fwd has the following values
> 
> - off. This value is used to disable PSF on all CPUs.
> 
> - on. This value is used to enable PSF on all CPUs.
>         This is also the default setting.
> 
> Signed-off-by: Ramakrishna Saripalli<rk.saripalli@amd.com>
> ---
>  .../admin-guide/kernel-parameters.txt         |  5 ++++
>  arch/x86/include/asm/cpufeatures.h            |  1 +
>  arch/x86/include/asm/msr-index.h              |  2 ++
>  arch/x86/kernel/cpu/amd.c                     | 23 +++++++++++++++++++
>  arch/x86/kernel/cpu/bugs.c                    |  6 ++++-
>  5 files changed, 36 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 04545725f187..a4dd08bb0d3a 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -3940,6 +3940,11 @@
>  			Format: {"off"}
>  			Disable Hardware Transactional Memory
>  
> +	predict_store_fwd=	[X86] This option controls PSF.
> +			off - Turns off PSF.
> +			on  - Turns on PSF.
> +			default : on.
> +
>  	preempt=	[KNL]
>  			Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
>  			none - Limited to cond_resched() calls
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index cc96e26d69f7..078f46022293 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -309,6 +309,7 @@
>  #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
>  #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
>  #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
> +#define X86_FEATURE_PSFD		(13*32+28) /* Predictive Store Forward Disable */
>  
>  /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
>  #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 546d6ecf0a35..f569918c8754 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -51,6 +51,8 @@
>  #define SPEC_CTRL_STIBP			BIT(SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
>  #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
>  #define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
> +#define SPEC_CTRL_PSFD_SHIFT		7
> +#define SPEC_CTRL_PSFD			BIT(SPEC_CTRL_PSFD_SHIFT)	/* Predictive Store Forwarding Disable */
>  
>  #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
>  #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 347a956f71ca..3fdaec8090b6 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -1170,3 +1170,26 @@ void set_dr_addr_mask(unsigned long mask, int dr)
>  		break;
>  	}
>  }
> +
> +static int __init psf_cmdline(char *str)
> +{
> +	u64 tmp = 0;
> +
> +	if (!boot_cpu_has(X86_FEATURE_PSFD))
> +		return 0;
> +
> +	if (!str)
> +		return -EINVAL;
> +
> +	if (!strcmp(str, "off")) {
> +		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
> +		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
> +		tmp |= SPEC_CTRL_PSFD;
> +		x86_spec_ctrl_base |= tmp;

With the change to bugs.c, this should just be:
	x86_spec_ctrl_base |= SPEC_CTRL_PSFD;

> +		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);

Then the whole rdmsrl/or/wrmsrl could just be replaced with msr_set_bit().
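
For illustration, something like this (untested sketch; msr_set_bit()
takes the MSR and the bit number):

	if (!strcmp(str, "off")) {
		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
		x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
		msr_set_bit(MSR_IA32_SPEC_CTRL, SPEC_CTRL_PSFD_SHIFT);
	}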

I think that would do what you need.

Thanks,
Tom

> +	}
> +
> +	return 0;
> +}
> +
> +early_param("predict_store_fwd", psf_cmdline);
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index d41b70fe4918..536136e0daa3 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -78,6 +78,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
>  
>  void __init check_bugs(void)
>  {
> +	u64 tmp = 0;
> +
>  	identify_boot_cpu();
>  
>  	/*
> @@ -97,7 +99,9 @@ void __init check_bugs(void)
>  	 * init code as it is not enumerated and depends on the family.
>  	 */
>  	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
> -		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> +		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
> +
> +	x86_spec_ctrl_base |= tmp;
>  
>  	/* Allow STIBP in MSR_SPEC_CTRL if supported */
>  	if (boot_cpu_has(X86_FEATURE_STIBP))
> 


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 14:50   ` Tom Lendacky
@ 2021-04-30 14:56     ` Saripalli, RK
  0 siblings, 0 replies; 16+ messages in thread
From: Saripalli, RK @ 2021-04-30 14:56 UTC (permalink / raw)
  To: Tom Lendacky, linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet
  Cc: bsd



On 4/30/2021 9:50 AM, Tom Lendacky wrote:
> On 4/30/21 8:17 AM, Ramakrishna Saripalli wrote:
>> From: Ramakrishna Saripalli <rk.saripalli@amd.com>
>>
>> Certain AMD processors feature a new technology called Predictive Store
>> Forwarding (PSF).
>>
>> PSF is a micro-architectural optimization designed to improve the
>> performance of code execution by predicting dependencies between
>> loads and stores.
>>
>> Incorrect PSF predictions can occur due to two reasons.
>>
>> - It is possible that the load/store pair may have had dependency for
>>   a while but the dependency has stopped because the address in the
>>   load/store pair has changed.
>>
>> - Second source of incorrect PSF prediction can occur because of an alias
>>   in the PSF predictor structure stored in the microarchitectural state.
>>   PSF predictor tracks load/store pair based on portions of instruction
>>   pointer. It is possible that a load/store pair which does have a
>>   dependency may be aliased by another load/store pair which does not have
>>   the same dependency. This can result in incorrect speculation.
>>
>>   Software may be able to detect this aliasing and perform side-channel
>>   attacks.
>>
>> All CPUs that implement PSF provide one bit to disable this feature.
>> If the bit to disable this feature is available, it means that the CPU
>> implements PSF feature and is therefore vulnerable to PSF risks.
>>
>> The bits that are introduced
>>
>> X86_FEATURE_PSFD: CPUID_Fn80000008_EBX[28] ("PSF disable")
>> 	If this bit is 1, CPU implements PSF and PSF control
>> 	via SPEC_CTRL_MSR is supported in the CPU.
>>
>> All AMD processors that support PSF implement a bit in
>> SPEC_CTRL MSR (0x48) to disable or enable Predictive Store
>> Forwarding.
>>
>> PSF control introduces a new kernel parameter called
>> 	predict_store_fwd.
>>
>> Kernel parameter predict_store_fwd has the following values
>>
>> - off. This value is used to disable PSF on all CPUs.
>>
>> - on. This value is used to enable PSF on all CPUs.
>>         This is also the default setting.
>>
>> Signed-off-by: Ramakrishna Saripalli<rk.saripalli@amd.com>
>> ---
>>  .../admin-guide/kernel-parameters.txt         |  5 ++++
>>  arch/x86/include/asm/cpufeatures.h            |  1 +
>>  arch/x86/include/asm/msr-index.h              |  2 ++
>>  arch/x86/kernel/cpu/amd.c                     | 23 +++++++++++++++++++
>>  arch/x86/kernel/cpu/bugs.c                    |  6 ++++-
>>  5 files changed, 36 insertions(+), 1 deletion(-)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>> index 04545725f187..a4dd08bb0d3a 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -3940,6 +3940,11 @@
>>  			Format: {"off"}
>>  			Disable Hardware Transactional Memory
>>  
>> +	predict_store_fwd=	[X86] This option controls PSF.
>> +			off - Turns off PSF.
>> +			on  - Turns on PSF.
>> +			default : on.
>> +
>>  	preempt=	[KNL]
>>  			Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
>>  			none - Limited to cond_resched() calls
>> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
>> index cc96e26d69f7..078f46022293 100644
>> --- a/arch/x86/include/asm/cpufeatures.h
>> +++ b/arch/x86/include/asm/cpufeatures.h
>> @@ -309,6 +309,7 @@
>>  #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
>>  #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
>>  #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
>> +#define X86_FEATURE_PSFD		(13*32+28) /* Predictive Store Forward Disable */
>>  
>>  /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
>>  #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
>> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
>> index 546d6ecf0a35..f569918c8754 100644
>> --- a/arch/x86/include/asm/msr-index.h
>> +++ b/arch/x86/include/asm/msr-index.h
>> @@ -51,6 +51,8 @@
>>  #define SPEC_CTRL_STIBP			BIT(SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
>>  #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
>>  #define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
>> +#define SPEC_CTRL_PSFD_SHIFT		7
>> +#define SPEC_CTRL_PSFD			BIT(SPEC_CTRL_PSFD_SHIFT)	/* Predictive Store Forwarding Disable */
>>  
>>  #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
>>  #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
>> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
>> index 347a956f71ca..3fdaec8090b6 100644
>> --- a/arch/x86/kernel/cpu/amd.c
>> +++ b/arch/x86/kernel/cpu/amd.c
>> @@ -1170,3 +1170,26 @@ void set_dr_addr_mask(unsigned long mask, int dr)
>>  		break;
>>  	}
>>  }
>> +
>> +static int __init psf_cmdline(char *str)
>> +{
>> +	u64 tmp = 0;
>> +
>> +	if (!boot_cpu_has(X86_FEATURE_PSFD))
>> +		return 0;
>> +
>> +	if (!str)
>> +		return -EINVAL;
>> +
>> +	if (!strcmp(str, "off")) {
>> +		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>> +		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>> +		tmp |= SPEC_CTRL_PSFD;
>> +		x86_spec_ctrl_base |= tmp;
> 
> With the change to bugs.c, this should just be:
> 	x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
> 
>> +		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> 
> Then the whole rdmsrl/or/wrmsrl could just be replaced with msr_set_bit().

Agreed. I was just being defensive here.
> 
> I think that would do what you need.
> 
> Thanks,
> Tom
> 
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +early_param("predict_store_fwd", psf_cmdline);
>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>> index d41b70fe4918..536136e0daa3 100644
>> --- a/arch/x86/kernel/cpu/bugs.c
>> +++ b/arch/x86/kernel/cpu/bugs.c
>> @@ -78,6 +78,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
>>  
>>  void __init check_bugs(void)
>>  {
>> +	u64 tmp = 0;
>> +
>>  	identify_boot_cpu();
>>  
>>  	/*
>> @@ -97,7 +99,9 @@ void __init check_bugs(void)
>>  	 * init code as it is not enumerated and depends on the family.
>>  	 */
>>  	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>> -		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>> +		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>> +
>> +	x86_spec_ctrl_base |= tmp;
>>  
>>  	/* Allow STIBP in MSR_SPEC_CTRL if supported */
>>  	if (boot_cpu_has(X86_FEATURE_STIBP))
>>


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 13:17 ` [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control Ramakrishna Saripalli
  2021-04-30 14:50   ` Tom Lendacky
@ 2021-04-30 19:42   ` Reiji Watanabe
  2021-04-30 19:52     ` Saripalli, RK
  2021-04-30 19:56     ` Borislav Petkov
  2021-05-05  0:11   ` Pawan Gupta
  2 siblings, 2 replies; 16+ messages in thread
From: Reiji Watanabe @ 2021-04-30 19:42 UTC (permalink / raw)
  To: Ramakrishna Saripalli
  Cc: linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet, bsd

> +static int __init psf_cmdline(char *str)
> +{
> +       u64 tmp = 0;
> +
> +       if (!boot_cpu_has(X86_FEATURE_PSFD))
> +               return 0;
> +
> +       if (!str)
> +               return -EINVAL;
> +
> +       if (!strcmp(str, "off")) {
> +               set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
> +               rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
> +               tmp |= SPEC_CTRL_PSFD;
> +               x86_spec_ctrl_base |= tmp;
> +               wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> +       }
> +
> +       return 0;
> +}


Shouldn't X86_FEATURE_MSR_SPEC_CTRL always be set if the CPU has
X86_FEATURE_PSFD even if the new kernel parameter is not used ?
(e.g. set X86_FEATURE_MSR_SPEC_CTRL in init_speculation_control()
and have psf_cmdline() do the rest)

Considering KVM/virtualization for a CPU that has X86_FEATURE_PSFD
but no other existing feature with MSR_IA32_SPEC_CTRL, if a host
doesn't enable PSFD with the new parameter, the host doesn't have
X86_FEATURE_MSR_SPEC_CTRL.  Then, it would be a problem if its
guests want to use PSFD looking at x86_virt_spec_ctrl().
(I'm not sure how you will change your previous KVM patch though)

Thanks,
Reiji


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 19:42   ` Reiji Watanabe
@ 2021-04-30 19:52     ` Saripalli, RK
  2021-05-01  1:50       ` Reiji Watanabe
  2021-04-30 19:56     ` Borislav Petkov
  1 sibling, 1 reply; 16+ messages in thread
From: Saripalli, RK @ 2021-04-30 19:52 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet, bsd



On 4/30/2021 2:42 PM, Reiji Watanabe wrote:
>> +static int __init psf_cmdline(char *str)
>> +{
>> +       u64 tmp = 0;
>> +
>> +       if (!boot_cpu_has(X86_FEATURE_PSFD))
>> +               return 0;
>> +
>> +       if (!str)
>> +               return -EINVAL;
>> +
>> +       if (!strcmp(str, "off")) {
>> +               set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>> +               rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>> +               tmp |= SPEC_CTRL_PSFD;
>> +               x86_spec_ctrl_base |= tmp;
>> +               wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>> +       }
>> +
>> +       return 0;
>> +}
> 
> 
> Shouldn't X86_FEATURE_MSR_SPEC_CTRL always be set if the CPU has
> X86_FEATURE_PSFD even if the new kernel parameter is not used ?
> (e.g. set X86_FEATURE_MSR_SPEC_CTRL in init_speculation_control()
> and have psf_cmdline() do the rest)
> 
> Considering KVM/virtualization for a CPU that has X86_FEATURE_PSFD
> but no other existing feature with MSR_IA32_SPEC_CTRL, if a host
> doesn't enable PSFD with the new parameter, the host doesn't have
> X86_FEATURE_MSR_SPEC_CTRL.  Then, it would be a problem if its
> guests want to use PSFD looking at x86_virt_spec_ctrl().
> (I'm not sure how you will change your previous KVM patch though)

Reiji, you are correct that X86_FEATURE_MSR_SPEC_CTRL should be enabled so KVM guests can use PSFD
even if host does not use it.
I have this change in my KVM patch.

Thanks for the review,
RK
> 
> Thanks,
> Reiji
> 


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 19:42   ` Reiji Watanabe
  2021-04-30 19:52     ` Saripalli, RK
@ 2021-04-30 19:56     ` Borislav Petkov
  2021-04-30 21:03       ` Reiji Watanabe
  1 sibling, 1 reply; 16+ messages in thread
From: Borislav Petkov @ 2021-04-30 19:56 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Ramakrishna Saripalli, linux-kernel, x86, tglx, mingo, hpa,
	Jonathan Corbet, bsd

On Fri, Apr 30, 2021 at 12:42:46PM -0700, Reiji Watanabe wrote:
> Then, it would be a problem if its guests want to use PSFD looking at
> x86_virt_spec_ctrl().

Well, will they want to do that? If so, why? Use case?

We decided to do this lite version to give people the opportunity to
evaluate whether there's a need to make full-blown mitigation-like,
per-thread thing like the rest of the mitigations in bugs.c or leave it
to be a chicken-bit thing.

So do you have any particular use case in mind or are you simply poking
holes in this?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 19:56     ` Borislav Petkov
@ 2021-04-30 21:03       ` Reiji Watanabe
  2021-04-30 21:11         ` Borislav Petkov
  0 siblings, 1 reply; 16+ messages in thread
From: Reiji Watanabe @ 2021-04-30 21:03 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Ramakrishna Saripalli, linux-kernel, x86, tglx, mingo, hpa,
	Jonathan Corbet, bsd

> > Then, it would be a problem if its guests want to use PSFD looking at
> > x86_virt_spec_ctrl().
>
> Well, will they want to do that? If so, why? Use case?
>
> We decided to do this lite version to give people the opportunity to
> evaluate whether there's a need to make full-blown mitigation-like,
> per-thread thing like the rest of the mitigations in bugs.c or leave it
> to be a chicken-bit thing.
>
> So do you have any particular use case in mind or are you simply poking
> holes in this?

I didn't mean per-thread thing but per-VM and I understand
the per-thread thing was dropped.
But, doesn't the current plan include even the per-VM control ?

Since the comments below from Ramakrishna (yesterday) mentioned
KVM/virtualization support, I assumed that there would be
per-VM control even in the current plan.
--------------------------------------------------------------
But I did test with KVM (with my patch that is not here) and I do not see
issues (meaning user space guest in QEMU is seeing PSF CPUID guest capability)
--------------------------------------------------------------
Yes this feature is needed for KVM/virtualization support.
--------------------------------------------------------------

Could you please clarify ?

Thanks,
Reiji


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 21:03       ` Reiji Watanabe
@ 2021-04-30 21:11         ` Borislav Petkov
  2021-05-01  1:01           ` Reiji Watanabe
  0 siblings, 1 reply; 16+ messages in thread
From: Borislav Petkov @ 2021-04-30 21:11 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Ramakrishna Saripalli, linux-kernel, x86, tglx, mingo, hpa,
	Jonathan Corbet, bsd

On Fri, Apr 30, 2021 at 02:03:07PM -0700, Reiji Watanabe wrote:
> Could you please clarify ?

Clarify what?

I asked you whether you have a VM use case. Since you're talking/asking
about virt support, you likely have some use case in mind...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 21:11         ` Borislav Petkov
@ 2021-05-01  1:01           ` Reiji Watanabe
  0 siblings, 0 replies; 16+ messages in thread
From: Reiji Watanabe @ 2021-05-01  1:01 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Ramakrishna Saripalli, linux-kernel, x86, tglx, mingo, hpa,
	Jonathan Corbet, bsd

> > Could you please clarify ?
>
> Clarify what?

I'm sorry, I overlooked the response from Ramakrishna today.
So, Never mind...


> I asked you whether you have a VM use case. Since you're talking/asking
> about virt support, you likely have some use case in mind...

When PSFD is available on the host (and the feature is exposed to
its guest), basically, we would like to let the guest enable or
disable PSFD on any vCPUs as it likes.

Thanks,
Reiji


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 19:52     ` Saripalli, RK
@ 2021-05-01  1:50       ` Reiji Watanabe
  2021-05-04 21:01         ` Saripalli, RK
  0 siblings, 1 reply; 16+ messages in thread
From: Reiji Watanabe @ 2021-05-01  1:50 UTC (permalink / raw)
  To: Saripalli, RK
  Cc: linux-kernel, x86, tglx, mingo, Borislav Petkov, hpa,
	Jonathan Corbet, bsd

> > Considering KVM/virtualization for a CPU that has X86_FEATURE_PSFD
> > but no other existing feature with MSR_IA32_SPEC_CTRL, if a host
> > doesn't enable PSFD with the new parameter, the host doesn't have
> > X86_FEATURE_MSR_SPEC_CTRL.  Then, it would be a problem if its
> > guests want to use PSFD looking at x86_virt_spec_ctrl().
> > (I'm not sure how you will change your previous KVM patch though)
>
> Reiji, you are correct that X86_FEATURE_MSR_SPEC_CTRL should be enabled so KVM guests can use PSFD
> even if host does not use it.
> I have this change in my KVM patch.


Thank you for the response. Yes, that sounds good.

Thanks,
Reiji


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-05-01  1:50       ` Reiji Watanabe
@ 2021-05-04 21:01         ` Saripalli, RK
  2021-05-04 22:11           ` Reiji Watanabe
  0 siblings, 1 reply; 16+ messages in thread
From: Saripalli, RK @ 2021-05-04 21:01 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: linux-kernel, x86, tglx, mingo, Borislav Petkov, hpa,
	Jonathan Corbet, bsd



On 4/30/2021 8:50 PM, Reiji Watanabe wrote:
>>> Considering KVM/virtualization for a CPU that has X86_FEATURE_PSFD
>>> but no other existing feature with MSR_IA32_SPEC_CTRL, if a host
>>> doesn't enable PSFD with the new parameter, the host doesn't have
>>> X86_FEATURE_MSR_SPEC_CTRL.  Then, it would be a problem if its
>>> guests want to use PSFD looking at x86_virt_spec_ctrl().
>>> (I'm not sure how you will change your previous KVM patch though)
>>
>> Reiji, you are correct that X86_FEATURE_MSR_SPEC_CTRL should be enabled so KVM guests can use PSFD
>> even if host does not use it.
>> I have this change in my KVM patch.
> 
> 
> Thank you for the response. Yes, that sounds good.
> 
> Thanks,
> Reiji
> 
Boris / Reiji, I am wondering if I have answered all the questions on these latest patches.
Just checking to see if there are any more changes needed.

Thanks,
RK


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-05-04 21:01         ` Saripalli, RK
@ 2021-05-04 22:11           ` Reiji Watanabe
  2021-05-05  1:13             ` Saripalli, RK
  0 siblings, 1 reply; 16+ messages in thread
From: Reiji Watanabe @ 2021-05-04 22:11 UTC (permalink / raw)
  To: Saripalli, RK
  Cc: linux-kernel, x86, tglx, mingo, Borislav Petkov, hpa,
	Jonathan Corbet, bsd

> Boris / Reiji, I am wondering if I have answered all the questions on these latest patches.
> Just checking to see if there are any more changes needed.

All the questions from me were answered and I don't have any
other comment/question for the latest patch (assuming that
the patch will be updated based on the comment from Tom).

FYI.
I was going to ask you a question about x86_spec_ctrl_mask.
But, since it is used only for KVM (although x86_spec_ctrl_mask
is defined/used only in arch/x86/kernel/cpu/bugs.c),
I plan to ask you about it once your KVM patch gets updated
rather than this patch.

Thanks,
Reiji


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-04-30 13:17 ` [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control Ramakrishna Saripalli
  2021-04-30 14:50   ` Tom Lendacky
  2021-04-30 19:42   ` Reiji Watanabe
@ 2021-05-05  0:11   ` Pawan Gupta
  2021-05-05  1:11     ` Saripalli, RK
  2 siblings, 1 reply; 16+ messages in thread
From: Pawan Gupta @ 2021-05-05  0:11 UTC (permalink / raw)
  To: Ramakrishna Saripalli
  Cc: linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet, bsd

On 30.04.2021 08:17, Ramakrishna Saripalli wrote:
>--- a/arch/x86/kernel/cpu/amd.c
>+++ b/arch/x86/kernel/cpu/amd.c
>@@ -1170,3 +1170,26 @@ void set_dr_addr_mask(unsigned long mask, int dr)
> 		break;
> 	}
> }
>+
>+static int __init psf_cmdline(char *str)
>+{
>+	u64 tmp = 0;
>+
>+	if (!boot_cpu_has(X86_FEATURE_PSFD))
>+		return 0;
>+
>+	if (!str)
>+		return -EINVAL;
>+
>+	if (!strcmp(str, "off")) {
>+		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>+		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>+		tmp |= SPEC_CTRL_PSFD;
>+		x86_spec_ctrl_base |= tmp;

I don't think there is a need to update x86_spec_ctrl_base here.
check_bugs() already reads the MSR_IA32_SPEC_CTRL and updates
x86_spec_ctrl_base. 

>+		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>+	}
>+
>+	return 0;
>+}
>+
>+early_param("predict_store_fwd", psf_cmdline);
>diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>index d41b70fe4918..536136e0daa3 100644
>--- a/arch/x86/kernel/cpu/bugs.c
>+++ b/arch/x86/kernel/cpu/bugs.c
>@@ -78,6 +78,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
>
> void __init check_bugs(void)
> {
>+	u64 tmp = 0;
>+
> 	identify_boot_cpu();
>
> 	/*
>@@ -97,7 +99,9 @@ void __init check_bugs(void)
> 	 * init code as it is not enumerated and depends on the family.
> 	 */
> 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>-		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>+		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>+
>+	x86_spec_ctrl_base |= tmp;

This change also doesn't seem to be necessary, psf_cmdline() updates the
MSR( i.e. sets PSFD).  Here read from the MSR will still update
x86_spec_ctrl_base to the correct value. Am I missing something?

Thanks,
Pawan


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-05-05  0:11   ` Pawan Gupta
@ 2021-05-05  1:11     ` Saripalli, RK
  0 siblings, 0 replies; 16+ messages in thread
From: Saripalli, RK @ 2021-05-05  1:11 UTC (permalink / raw)
  To: Pawan Gupta; +Cc: linux-kernel, x86, tglx, mingo, bp, hpa, Jonathan Corbet, bsd



On 5/4/2021 7:11 PM, Pawan Gupta wrote:
> On 30.04.2021 08:17, Ramakrishna Saripalli wrote:
>> --- a/arch/x86/kernel/cpu/amd.c
>> +++ b/arch/x86/kernel/cpu/amd.c
>> @@ -1170,3 +1170,26 @@ void set_dr_addr_mask(unsigned long mask, int dr)
>>         break;
>>     }
>> }
>> +
>> +static int __init psf_cmdline(char *str)
>> +{
>> +    u64 tmp = 0;
>> +
>> +    if (!boot_cpu_has(X86_FEATURE_PSFD))
>> +        return 0;
>> +
>> +    if (!str)
>> +        return -EINVAL;
>> +
>> +    if (!strcmp(str, "off")) {
>> +        set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>> +        rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>> +        tmp |= SPEC_CTRL_PSFD;
>> +        x86_spec_ctrl_base |= tmp;
> 
> I don't think there is a need to update x86_spec_ctrl_base here.
> check_bugs() already reads the MSR_IA32_SPEC_CTRL and updates
> x86_spec_ctrl_base.

Pawan, you are correct. I added the update to x86_spec_ctrl_base to ensure that the bits 
in x86_spec_ctrl_base are consistent with the actual bits in the MSR after this change

>> +        wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +early_param("predict_store_fwd", psf_cmdline);
>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>> index d41b70fe4918..536136e0daa3 100644
>> --- a/arch/x86/kernel/cpu/bugs.c
>> +++ b/arch/x86/kernel/cpu/bugs.c
>> @@ -78,6 +78,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
>>
>> void __init check_bugs(void)
>> {
>> +    u64 tmp = 0;
>> +
>>     identify_boot_cpu();
>>
>>     /*
>> @@ -97,7 +99,9 @@ void __init check_bugs(void)
>>      * init code as it is not enumerated and depends on the family.
>>      */
>>     if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>> -        rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>> +        rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>> +
>> +    x86_spec_ctrl_base |= tmp;
> 
> This change also doesn't seem to be necessary, psf_cmdline() updates the
> MSR( i.e. sets PSFD).  Here read from the MSR will still update
> x86_spec_ctrl_base to the correct value. Am I missing something?

Yes you are correct because psf_cmdline executes before check_bugs() and does update
the MSR. 


> 
> Thanks,
> Pawan


* Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.
  2021-05-04 22:11           ` Reiji Watanabe
@ 2021-05-05  1:13             ` Saripalli, RK
  0 siblings, 0 replies; 16+ messages in thread
From: Saripalli, RK @ 2021-05-05  1:13 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: linux-kernel, x86, tglx, mingo, Borislav Petkov, hpa,
	Jonathan Corbet, bsd



On 5/4/2021 5:11 PM, Reiji Watanabe wrote:
>> Boris / Reiji, I am wondering if I have answered all the questions on these latest patches.
>> Just checking to see if there are any more changes needed.
> 
> All the questions from me were answered and I don't have any
> other comment/question for the latest patch (assuming that
> the patch will be updated based on the comment from Tom).

Yes, I have a new patch that takes into account Tom's suggestion. It is a minor change.
I was holding off to incorporate feedback from others (Pawan).
I will send the new patch with Tom's suggestions incorporated.

> 
> FYI.
> I was going to ask you a question about x86_spec_ctrl_mask.
> But, since it is used only for KVM (although x86_spec_ctrl_mask
> is defined/used only in arch/x86/kernel/cpu/bugs.c),
> I plan to ask you about it once your KVM patch gets updated
> rather than this patch.
> 
> Thanks,
> Reiji
> 



end of thread, other threads:[~2021-05-05  1:13 UTC | newest]

Thread overview: 16+ messages
2021-04-30 13:17 [PATCH v4 0/1] Introduce support for PSF control Ramakrishna Saripalli
2021-04-30 13:17 ` [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control Ramakrishna Saripalli
2021-04-30 14:50   ` Tom Lendacky
2021-04-30 14:56     ` Saripalli, RK
2021-04-30 19:42   ` Reiji Watanabe
2021-04-30 19:52     ` Saripalli, RK
2021-05-01  1:50       ` Reiji Watanabe
2021-05-04 21:01         ` Saripalli, RK
2021-05-04 22:11           ` Reiji Watanabe
2021-05-05  1:13             ` Saripalli, RK
2021-04-30 19:56     ` Borislav Petkov
2021-04-30 21:03       ` Reiji Watanabe
2021-04-30 21:11         ` Borislav Petkov
2021-05-01  1:01           ` Reiji Watanabe
2021-05-05  0:11   ` Pawan Gupta
2021-05-05  1:11     ` Saripalli, RK
