[v4,1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.

Message ID 20210430131733.192414-2-rsaripal@amd.com
State New, archived
Series
  • Introduce support for PSF control.

Commit Message

Saripalli, RK April 30, 2021, 1:17 p.m. UTC
From: Ramakrishna Saripalli <rk.saripalli@amd.com>

Certain AMD processors feature a new technology called Predictive Store
Forwarding (PSF).

PSF is a micro-architectural optimization designed to improve the
performance of code execution by predicting dependencies between
loads and stores.

Incorrect PSF predictions can occur for two reasons:

- The load/store pair may have had a dependency for a while, but the
  dependency has since stopped because the address in the load/store
  pair has changed.

- An alias may exist in the PSF predictor structure, which is stored in
  microarchitectural state. The PSF predictor tracks load/store pairs
  based on portions of the instruction pointer, so a load/store pair
  that does have a dependency may be aliased by another load/store pair
  that does not. This can result in incorrect speculation; see the
  sketch below.

  Software may be able to detect this aliasing and perform side-channel
  attacks.
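
As an illustration (hypothetical code, not part of this patch), the
pattern involved is a store followed by a load whose addresses only
sometimes match:

	/*
	 * Illustrative only. Whether the store and the load below are
	 * dependent varies at runtime with idx_a and idx_b. A predictor
	 * trained on executions where idx_a == idx_b may speculatively
	 * forward the stored value even on executions where the
	 * addresses no longer match.
	 */
	void store_load_pair(int *arr, int idx_a, int idx_b, int val)
	{
		int x;

		arr[idx_a] = val;	/* store */
		x = arr[idx_b];		/* load: dependent only if idx_a == idx_b */
		(void)x;
	}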

All CPUs that implement PSF provide one bit to disable the feature.
If this disable bit is present, the CPU implements PSF and is
therefore exposed to PSF risks.

The following bit is introduced:

X86_FEATURE_PSFD: CPUID_Fn80000008_EBX[28] ("PSF disable")
	If this bit is 1, the CPU implements PSF and PSF control
	via the SPEC_CTRL MSR is supported in the CPU.

All AMD processors that support PSF implement a bit in the SPEC_CTRL
MSR (0x48) to enable or disable Predictive Store Forwarding.
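
For illustration, the PSFD enumeration can be checked from userspace
with a small program (hypothetical, not part of this patch):

	#include <cpuid.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* CPUID_Fn80000008, PSFD is reported in EBX bit 28 */
		if (!__get_cpuid_count(0x80000008, 0, &eax, &ebx, &ecx, &edx))
			return 1;

		printf("PSFD %ssupported\n", (ebx & (1u << 28)) ? "" : "not ");
		return 0;
	}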

PSF control introduces a new kernel parameter called
	predict_store_fwd.

The predict_store_fwd parameter accepts the following values:

- off. Disables PSF on all CPUs.

- on. Enables PSF on all CPUs. This is also the default setting.

An example invocation is shown below.
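
For example, to boot with PSF disabled, append the parameter to the
kernel command line (illustrative):

	predict_store_fwd=off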

Signed-off-by: Ramakrishna Saripalli <rk.saripalli@amd.com>
---
 .../admin-guide/kernel-parameters.txt         |  5 ++++
 arch/x86/include/asm/cpufeatures.h            |  1 +
 arch/x86/include/asm/msr-index.h              |  2 ++
 arch/x86/kernel/cpu/amd.c                     | 23 +++++++++++++++++++
 arch/x86/kernel/cpu/bugs.c                    |  6 ++++-
 5 files changed, 36 insertions(+), 1 deletion(-)

Comments

Tom Lendacky April 30, 2021, 2:50 p.m. UTC | #1
On 4/30/21 8:17 AM, Ramakrishna Saripalli wrote:
> From: Ramakrishna Saripalli <rk.saripalli@amd.com>
> 
> [ ... ]
> 
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 347a956f71ca..3fdaec8090b6 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -1170,3 +1170,26 @@ void set_dr_addr_mask(unsigned long mask, int dr)
>  		break;
>  	}
>  }
> +
> +static int __init psf_cmdline(char *str)
> +{
> +	u64 tmp = 0;
> +
> +	if (!boot_cpu_has(X86_FEATURE_PSFD))
> +		return 0;
> +
> +	if (!str)
> +		return -EINVAL;
> +
> +	if (!strcmp(str, "off")) {
> +		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
> +		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
> +		tmp |= SPEC_CTRL_PSFD;
> +		x86_spec_ctrl_base |= tmp;

With the change to bugs.c, this should just be:
	x86_spec_ctrl_base |= SPEC_CTRL_PSFD;

> +		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);

Then the whole rdmsrl/or/wrmsrl could just be replaced with msr_set_bit().
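
Something like this, as an untested sketch (msr_set_bit() takes the bit
position, hence SPEC_CTRL_PSFD_SHIFT):

	if (!strcmp(str, "off")) {
		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
		x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
		msr_set_bit(MSR_IA32_SPEC_CTRL, SPEC_CTRL_PSFD_SHIFT);
	}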

I think that would do what you need.

Thanks,
Tom

Saripalli, RK April 30, 2021, 2:56 p.m. UTC | #2
On 4/30/2021 9:50 AM, Tom Lendacky wrote:
> On 4/30/21 8:17 AM, Ramakrishna Saripalli wrote:
>> From: Ramakrishna Saripalli <rk.saripalli@amd.com>
>>
>> [ ... ]
>>
>> +static int __init psf_cmdline(char *str)
>> +{
>> +	u64 tmp = 0;
>> +
>> +	if (!boot_cpu_has(X86_FEATURE_PSFD))
>> +		return 0;
>> +
>> +	if (!str)
>> +		return -EINVAL;
>> +
>> +	if (!strcmp(str, "off")) {
>> +		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>> +		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>> +		tmp |= SPEC_CTRL_PSFD;
>> +		x86_spec_ctrl_base |= tmp;
> 
> With the change to bugs.c, this should just be:
> 	x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
> 
>> +		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> 
> Then the whole rdmsrl/or/wrmsrl could just be replaced with msr_set_bit().

Agreed. I was just being defensive here.
Reiji Watanabe April 30, 2021, 7:42 p.m. UTC | #3
> +static int __init psf_cmdline(char *str)
> +{
> +       u64 tmp = 0;
> +
> +       if (!boot_cpu_has(X86_FEATURE_PSFD))
> +               return 0;
> +
> +       if (!str)
> +               return -EINVAL;
> +
> +       if (!strcmp(str, "off")) {
> +               set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
> +               rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
> +               tmp |= SPEC_CTRL_PSFD;
> +               x86_spec_ctrl_base |= tmp;
> +               wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> +       }
> +
> +       return 0;
> +}


Shouldn't X86_FEATURE_MSR_SPEC_CTRL always be set if the CPU has
X86_FEATURE_PSFD, even if the new kernel parameter is not used?
(e.g. set X86_FEATURE_MSR_SPEC_CTRL in init_speculation_control()
and have psf_cmdline() do the rest)
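
A rough, untested sketch of that idea, reusing the existing pattern in
init_speculation_control():

	/* in init_speculation_control(), next to the other AMD bits */
	if (cpu_has(c, X86_FEATURE_PSFD))
		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);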

Considering KVM/virtualization for a CPU that has X86_FEATURE_PSFD
but no other existing feature with MSR_IA32_SPEC_CTRL, if a host
doesn't enable PSFD with the new parameter, the host doesn't have
X86_FEATURE_MSR_SPEC_CTRL.  Then, it would be a problem if its
guests want to use PSFD looking at x86_virt_spec_ctrl().
(I'm not sure how you will change your previous KVM patch though)

Thanks,
Reiji
Saripalli, RK April 30, 2021, 7:52 p.m. UTC | #4
On 4/30/2021 2:42 PM, Reiji Watanabe wrote:
> [ ... ]
> 
> Shouldn't X86_FEATURE_MSR_SPEC_CTRL always be set if the CPU has
> X86_FEATURE_PSFD, even if the new kernel parameter is not used?
> (e.g. set X86_FEATURE_MSR_SPEC_CTRL in init_speculation_control()
> and have psf_cmdline() do the rest)
> 
> Considering KVM/virtualization for a CPU that has X86_FEATURE_PSFD
> but no other existing feature with MSR_IA32_SPEC_CTRL, if a host
> doesn't enable PSFD with the new parameter, the host doesn't have
> X86_FEATURE_MSR_SPEC_CTRL.  Then, it would be a problem if its
> guests want to use PSFD looking at x86_virt_spec_ctrl().
> (I'm not sure how you will change your previous KVM patch though)

Reiji, you are correct that X86_FEATURE_MSR_SPEC_CTRL should be enabled
so KVM guests can use PSFD even if the host does not use it.
I have this change in my KVM patch.

Thanks for the review,
RK
Borislav Petkov April 30, 2021, 7:56 p.m. UTC | #5
On Fri, Apr 30, 2021 at 12:42:46PM -0700, Reiji Watanabe wrote:
> Then, it would be a problem if its guests want to use PSFD looking at
> x86_virt_spec_ctrl().

Well, will they want to do that? If so, why? Use case?

We decided to do this lite version to give people the opportunity to
evaluate whether there's a need to make a full-blown, mitigation-like,
per-thread thing like the rest of the mitigations in bugs.c, or to
leave it as a chicken-bit thing.

So do you have any particular use case in mind or are you simply poking
holes in this?
Reiji Watanabe April 30, 2021, 9:03 p.m. UTC | #6
> > Then, it would be a problem if its guests want to use PSFD looking at
> > x86_virt_spec_ctrl().
>
> Well, will they want to do that? If so, why? Use case?
>
> We decided to do this lite version to give people the opportunity to
> evaluate whether there's a need to make full-blown mitigation-like,
> per-thread thing like the rest of the mitigations in bugs.c or leave it
> to be a chicken-bit thing.
>
> So do you have any particular use case in mind or are you simply poking
> holes in this?

I didn't mean a per-thread thing but per-VM, and I understand
the per-thread thing was dropped.
But doesn't the current plan include even per-VM control?

Since the comments below from Ramakrishna (yesterday) mentioned
KVM/virtualization support, I assumed that there would be
per-VM control even in the current plan.
--------------------------------------------------------------
But I did test with KVM (with my patch that is not here) and I do not see
issues (meaning user space guest in QEMU is seeing PSF CPUID guest capability)
--------------------------------------------------------------
Yes this feature is needed for KVM/virtualization support.
--------------------------------------------------------------

Could you please clarify?

Thanks,
Reiji
Borislav Petkov April 30, 2021, 9:11 p.m. UTC | #7
On Fri, Apr 30, 2021 at 02:03:07PM -0700, Reiji Watanabe wrote:
> Could you please clarify?

Clarify what?

I asked you whether you have a VM use case. Since you're talking/asking
about virt support, you likely have some use case in mind...
Reiji Watanabe May 1, 2021, 1:01 a.m. UTC | #8
> > Could you please clarify?
>
> Clarify what?

I'm sorry, I overlooked the response from Ramakrishna today.
So, never mind...


> I asked you whether you have a VM use case. Since you're talking/asking
> about virt support, you likely have some use case in mind...

When PSFD is available on the host (and the feature is exposed to
its guest), basically, we would like to let the guest enable or
disable PSFD on any vCPU as it likes.

Thanks,
Reiji
Reiji Watanabe May 1, 2021, 1:50 a.m. UTC | #9
> > Considering KVM/virtualization for a CPU that has X86_FEATURE_PSFD
> > but no other existing feature with MSR_IA32_SPEC_CTRL, if a host
> > doesn't enable PSFD with the new parameter, the host doesn't have
> > X86_FEATURE_MSR_SPEC_CTRL.  Then, it would be a problem if its
> > guests want to use PSFD looking at x86_virt_spec_ctrl().
> > (I'm not sure how you will change your previous KVM patch though)
>
> Reiji, you are correct that X86_FEATURE_MSR_SPEC_CTRL should be enabled so KVM guests can use PSFD
> even if host does not use it.
> I have this change in my KVM patch.


Thank you for the response. Yes, that sounds good.

Thanks,
Reiji
Saripalli, RK May 4, 2021, 9:01 p.m. UTC | #10
On 4/30/2021 8:50 PM, Reiji Watanabe wrote:
>>> Considering KVM/virtualization for a CPU that has X86_FEATURE_PSFD
>>> but no other existing feature with MSR_IA32_SPEC_CTRL, if a host
>>> doesn't enable PSFD with the new parameter, the host doesn't have
>>> X86_FEATURE_MSR_SPEC_CTRL.  Then, it would be a problem if its
>>> guests want to use PSFD looking at x86_virt_spec_ctrl().
>>> (I'm not sure how you will change your previous KVM patch though)
>>
>> Reiji, you are correct that X86_FEATURE_MSR_SPEC_CTRL should be enabled so KVM guests can use PSFD
>> even if host does not use it.
>> I have this change in my KVM patch.
> 
> 
> Thank you for the response. Yes, that sounds good.
> 
> Thanks,
> Reiji
> 
Boris / Reiji, I am wondering if I have answered all the questions on these latest patches.
Just checking to see if there are any more changes needed.

Thanks,
RK
Reiji Watanabe May 4, 2021, 10:11 p.m. UTC | #11
> Boris / Reiji, I am wondering if I have answered all the questions on these latest patches.
> Just checking to see if there are any more changes needed.

All the questions from me were answered and I don't have any
other comment/question for the latest patch (assuming that
the patch will be updated based on the comment from Tom).

FYI.
I was going to ask you a question about x86_spec_ctrl_mask.
But, since it is used only for KVM (although x86_spec_ctrl_mask
is defined/used only in arch/x86/kernel/cpu/bugs.c),
I plan to ask you about it once your KVM patch gets updated,
rather than in this patch.

Thanks,
Reiji
Pawan Gupta May 5, 2021, 12:11 a.m. UTC | #12
On 30.04.2021 08:17, Ramakrishna Saripalli wrote:
>--- a/arch/x86/kernel/cpu/amd.c
>+++ b/arch/x86/kernel/cpu/amd.c
>@@ -1170,3 +1170,26 @@ void set_dr_addr_mask(unsigned long mask, int dr)
> 		break;
> 	}
> }
>+
>+static int __init psf_cmdline(char *str)
>+{
>+	u64 tmp = 0;
>+
>+	if (!boot_cpu_has(X86_FEATURE_PSFD))
>+		return 0;
>+
>+	if (!str)
>+		return -EINVAL;
>+
>+	if (!strcmp(str, "off")) {
>+		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>+		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>+		tmp |= SPEC_CTRL_PSFD;
>+		x86_spec_ctrl_base |= tmp;

I don't think there is a need to update x86_spec_ctrl_base here.
check_bugs() already reads MSR_IA32_SPEC_CTRL and updates
x86_spec_ctrl_base.

>+		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>+	}
>+
>+	return 0;
>+}
>+
>+early_param("predict_store_fwd", psf_cmdline);
>diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>index d41b70fe4918..536136e0daa3 100644
>--- a/arch/x86/kernel/cpu/bugs.c
>+++ b/arch/x86/kernel/cpu/bugs.c
>@@ -78,6 +78,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
>
> void __init check_bugs(void)
> {
>+	u64 tmp = 0;
>+
> 	identify_boot_cpu();
>
> 	/*
>@@ -97,7 +99,9 @@ void __init check_bugs(void)
> 	 * init code as it is not enumerated and depends on the family.
> 	 */
> 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>-		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>+		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>+
>+	x86_spec_ctrl_base |= tmp;

This change also doesn't seem to be necessary; psf_cmdline() updates
the MSR (i.e. sets PSFD), so the read from the MSR here will still
update x86_spec_ctrl_base to the correct value. Am I missing something?

Thanks,
Pawan
Saripalli, RK May 5, 2021, 1:11 a.m. UTC | #13
On 5/4/2021 7:11 PM, Pawan Gupta wrote:
> On 30.04.2021 08:17, Ramakrishna Saripalli wrote:
>> --- a/arch/x86/kernel/cpu/amd.c
>> +++ b/arch/x86/kernel/cpu/amd.c
>> @@ -1170,3 +1170,26 @@ void set_dr_addr_mask(unsigned long mask, int dr)
>>         break;
>>     }
>> }
>> +
>> +static int __init psf_cmdline(char *str)
>> +{
>> +    u64 tmp = 0;
>> +
>> +    if (!boot_cpu_has(X86_FEATURE_PSFD))
>> +        return 0;
>> +
>> +    if (!str)
>> +        return -EINVAL;
>> +
>> +    if (!strcmp(str, "off")) {
>> +        set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>> +        rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>> +        tmp |= SPEC_CTRL_PSFD;
>> +        x86_spec_ctrl_base |= tmp;
> 
> I don't think there is a need to update x86_spec_ctrl_base here.
> check_bugs() already reads MSR_IA32_SPEC_CTRL and updates
> x86_spec_ctrl_base.

Pawan, you are correct. I added the update to x86_spec_ctrl_base to
ensure that the bits in x86_spec_ctrl_base are consistent with the
actual bits in the MSR after this change.

>> +        wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +early_param("predict_store_fwd", psf_cmdline);
>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>> index d41b70fe4918..536136e0daa3 100644
>> --- a/arch/x86/kernel/cpu/bugs.c
>> +++ b/arch/x86/kernel/cpu/bugs.c
>> @@ -78,6 +78,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
>>
>> void __init check_bugs(void)
>> {
>> +    u64 tmp = 0;
>> +
>>     identify_boot_cpu();
>>
>>     /*
>> @@ -97,7 +99,9 @@ void __init check_bugs(void)
>>      * init code as it is not enumerated and depends on the family.
>>      */
>>     if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>> -        rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>> +        rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
>> +
>> +    x86_spec_ctrl_base |= tmp;
> 
> This change also doesn't seem to be necessary; psf_cmdline() updates
> the MSR (i.e. sets PSFD), so the read from the MSR here will still
> update x86_spec_ctrl_base to the correct value. Am I missing something?

Yes, you are correct: psf_cmdline() executes before check_bugs() and
does update the MSR.
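
With that, plus Tom's msr_set_bit() suggestion, the "off" handling
could reduce to roughly this (untested sketch):

	if (!strcmp(str, "off")) {
		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
		msr_set_bit(MSR_IA32_SPEC_CTRL, SPEC_CTRL_PSFD_SHIFT);
	}

check_bugs() then picks up PSFD when it reads the MSR into
x86_spec_ctrl_base.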


Saripalli, RK May 5, 2021, 1:13 a.m. UTC | #14
On 5/4/2021 5:11 PM, Reiji Watanabe wrote:
>> Boris / Reiji, I am wondering if I have answered all the questions on these latest patches.
>> Just checking to see if there are any more changes needed.
> 
> All the questions from me were answered and I don't have any
> other comment/question for the latest patch (assuming that
> the patch will be updated based on the comment from Tom).

Yes, I have a new patch that takes Tom's suggestion into account; it is
a minor change. I was holding off to incorporate feedback from others
(Pawan). I will send the new patch with those changes incorporated.


Patch

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 04545725f187..a4dd08bb0d3a 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3940,6 +3940,11 @@ 
 			Format: {"off"}
 			Disable Hardware Transactional Memory
 
+	predict_store_fwd=	[X86] This option controls PSF.
+			off - Turns off PSF.
+			on  - Turns on PSF.
+			default : on.
+
 	preempt=	[KNL]
 			Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
 			none - Limited to cond_resched() calls
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index cc96e26d69f7..078f46022293 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -309,6 +309,7 @@ 
 #define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
 #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
+#define X86_FEATURE_PSFD		(13*32+28) /* Predictive Store Forward Disable */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 546d6ecf0a35..f569918c8754 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -51,6 +51,8 @@ 
 #define SPEC_CTRL_STIBP			BIT(SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
 #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
 #define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
+#define SPEC_CTRL_PSFD_SHIFT		7
+#define SPEC_CTRL_PSFD			BIT(SPEC_CTRL_PSFD_SHIFT)	/* Predictive Store Forwarding Disable */
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 347a956f71ca..3fdaec8090b6 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1170,3 +1170,26 @@  void set_dr_addr_mask(unsigned long mask, int dr)
 		break;
 	}
 }
+
+static int __init psf_cmdline(char *str)
+{
+	u64 tmp = 0;
+
+	if (!boot_cpu_has(X86_FEATURE_PSFD))
+		return 0;
+
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off")) {
+		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
+		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
+		tmp |= SPEC_CTRL_PSFD;
+		x86_spec_ctrl_base |= tmp;
+		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+	}
+
+	return 0;
+}
+
+early_param("predict_store_fwd", psf_cmdline);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d41b70fe4918..536136e0daa3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -78,6 +78,8 @@  EXPORT_SYMBOL_GPL(mds_idle_clear);
 
 void __init check_bugs(void)
 {
+	u64 tmp = 0;
+
 	identify_boot_cpu();
 
 	/*
@@ -97,7 +99,9 @@  void __init check_bugs(void)
 	 * init code as it is not enumerated and depends on the family.
 	 */
 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
-		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+		rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
+
+	x86_spec_ctrl_base |= tmp;
 
 	/* Allow STIBP in MSR_SPEC_CTRL if supported */
 	if (boot_cpu_has(X86_FEATURE_STIBP))