* [PATCH] KVM: SVM: Issue WBINVD after deactivating an SEV guest
@ 2020-03-20 16:07 Tom Lendacky
  2020-03-20 17:43 ` Paolo Bonzini
  2020-03-20 20:34 ` David Rientjes
  0 siblings, 2 replies; 7+ messages in thread
From: Tom Lendacky @ 2020-03-20 16:07 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Brijesh Singh, David Rientjes

Currently, CLFLUSH is used to flush SEV guest memory before the guest is
terminated (or a memory hotplug region is removed). However, CLFLUSH is
not enough to ensure that SEV guest tagged data is flushed from the cache.

With 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations"), the
original WBINVD was removed. This then exposed crashes at random times
because of a cache flush race with a page that had both a hypervisor and
a guest tag in the cache.

Restore the WBINVD when destroying an SEV guest and add a WBINVD to the
svm_unregister_enc_region() function to ensure hotplug memory is flushed
when removed. The DF_FLUSH can still be avoided at this point.

Fixes: 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kvm/svm.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 08568ae9f7a1..d54cdca9c140 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1980,14 +1980,6 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 static void __unregister_enc_region_locked(struct kvm *kvm,
 					   struct enc_region *region)
 {
-	/*
-	 * The guest may change the memory encryption attribute from C=0 -> C=1
-	 * or vice versa for this memory range. Lets make sure caches are
-	 * flushed to ensure that guest data gets written into memory with
-	 * correct C-bit.
-	 */
-	sev_clflush_pages(region->pages, region->npages);
-
 	sev_unpin_memory(kvm, region->pages, region->npages);
 	list_del(&region->list);
 	kfree(region);
@@ -2004,6 +1996,13 @@ static void sev_vm_destroy(struct kvm *kvm)
 
 	mutex_lock(&kvm->lock);
 
+	/*
+	 * Ensure that all guest tagged cache entries are flushed before
+	 * releasing the pages back to the system for use. CLFLUSH will
+	 * not do this, so issue a WBINVD.
+	 */
+	wbinvd_on_all_cpus();
+
 	/*
 	 * if userspace was terminated before unregistering the memory regions
 	 * then lets unpin all the registered memory.
@@ -7247,6 +7246,13 @@ static int svm_unregister_enc_region(struct kvm *kvm,
 		goto failed;
 	}
 
+	/*
+	 * Ensure that all guest tagged cache entries are flushed before
+	 * releasing the pages back to the system for use. CLFLUSH will
+	 * not do this, so issue a WBINVD.
+	 */
+	wbinvd_on_all_cpus();
+
 	__unregister_enc_region_locked(kvm, region);
 
 	mutex_unlock(&kvm->lock);
-- 
2.17.1



* Re: [PATCH] KVM: SVM: Issue WBINVD after deactivating an SEV guest
  2020-03-20 16:07 [PATCH] KVM: SVM: Issue WBINVD after deactivating an SEV guest Tom Lendacky
@ 2020-03-20 17:43 ` Paolo Bonzini
  2020-03-20 20:34 ` David Rientjes
  1 sibling, 0 replies; 7+ messages in thread
From: Paolo Bonzini @ 2020-03-20 17:43 UTC (permalink / raw)
  To: Tom Lendacky, kvm, linux-kernel
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Brijesh Singh, David Rientjes

On 20/03/20 17:07, Tom Lendacky wrote:
> Currently, CLFLUSH is used to flush SEV guest memory before the guest is
> terminated (or a memory hotplug region is removed). However, CLFLUSH is
> not enough to ensure that SEV guest tagged data is flushed from the cache.
> 
> With 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations"), the
> original WBINVD was removed. This then exposed crashes at random times
> because of a cache flush race with a page that had both a hypervisor and
> a guest tag in the cache.
> 
> Restore the WBINVD when destroying an SEV guest and add a WBINVD to the
> svm_unregister_enc_region() function to ensure hotplug memory is flushed
> when removed. The DF_FLUSH can still be avoided at this point.
> 
> Fixes: 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations")
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/kvm/svm.c | 22 ++++++++++++++--------
>  1 file changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 08568ae9f7a1..d54cdca9c140 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1980,14 +1980,6 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
>  static void __unregister_enc_region_locked(struct kvm *kvm,
>  					   struct enc_region *region)
>  {
> -	/*
> -	 * The guest may change the memory encryption attribute from C=0 -> C=1
> -	 * or vice versa for this memory range. Lets make sure caches are
> -	 * flushed to ensure that guest data gets written into memory with
> -	 * correct C-bit.
> -	 */
> -	sev_clflush_pages(region->pages, region->npages);
> -
>  	sev_unpin_memory(kvm, region->pages, region->npages);
>  	list_del(&region->list);
>  	kfree(region);
> @@ -2004,6 +1996,13 @@ static void sev_vm_destroy(struct kvm *kvm)
>  
>  	mutex_lock(&kvm->lock);
>  
> +	/*
> +	 * Ensure that all guest tagged cache entries are flushed before
> +	 * releasing the pages back to the system for use. CLFLUSH will
> +	 * not do this, so issue a WBINVD.
> +	 */
> +	wbinvd_on_all_cpus();
> +
>  	/*
>  	 * if userspace was terminated before unregistering the memory regions
>  	 * then lets unpin all the registered memory.
> @@ -7247,6 +7246,13 @@ static int svm_unregister_enc_region(struct kvm *kvm,
>  		goto failed;
>  	}
>  
> +	/*
> +	 * Ensure that all guest tagged cache entries are flushed before
> +	 * releasing the pages back to the system for use. CLFLUSH will
> +	 * not do this, so issue a WBINVD.
> +	 */
> +	wbinvd_on_all_cpus();
> +
>  	__unregister_enc_region_locked(kvm, region);
>  
>  	mutex_unlock(&kvm->lock);
> 

Queued for kvm/master, thanks.

Paolo



* Re: [PATCH] KVM: SVM: Issue WBINVD after deactivating an SEV guest
  2020-03-20 16:07 [PATCH] KVM: SVM: Issue WBINVD after deactivating an SEV guest Tom Lendacky
  2020-03-20 17:43 ` Paolo Bonzini
@ 2020-03-20 20:34 ` David Rientjes
  2020-03-20 20:37   ` Tom Lendacky
  1 sibling, 1 reply; 7+ messages in thread
From: David Rientjes @ 2020-03-20 20:34 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: kvm, linux-kernel, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Brijesh Singh

On Fri, 20 Mar 2020, Tom Lendacky wrote:

> Currently, CLFLUSH is used to flush SEV guest memory before the guest is
> terminated (or a memory hotplug region is removed). However, CLFLUSH is
> not enough to ensure that SEV guest tagged data is flushed from the cache.
> 
> With 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations"), the
> original WBINVD was removed. This then exposed crashes at random times
> because of a cache flush race with a page that had both a hypervisor and
> a guest tag in the cache.
> 
> Restore the WBINVD when destroying an SEV guest and add a WBINVD to the
> svm_unregister_enc_region() function to ensure hotplug memory is flushed
> when removed. The DF_FLUSH can still be avoided at this point.
> 
> Fixes: 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations")
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>

Acked-by: David Rientjes <rientjes@google.com>

Should this be marked for stable?

Cc: stable@vger.kernel.org # 5.5+

> ---
>  arch/x86/kvm/svm.c | 22 ++++++++++++++--------
>  1 file changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 08568ae9f7a1..d54cdca9c140 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1980,14 +1980,6 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
>  static void __unregister_enc_region_locked(struct kvm *kvm,
>  					   struct enc_region *region)
>  {
> -	/*
> -	 * The guest may change the memory encryption attribute from C=0 -> C=1
> -	 * or vice versa for this memory range. Lets make sure caches are
> -	 * flushed to ensure that guest data gets written into memory with
> -	 * correct C-bit.
> -	 */
> -	sev_clflush_pages(region->pages, region->npages);
> -
>  	sev_unpin_memory(kvm, region->pages, region->npages);
>  	list_del(&region->list);
>  	kfree(region);
> @@ -2004,6 +1996,13 @@ static void sev_vm_destroy(struct kvm *kvm)
>  
>  	mutex_lock(&kvm->lock);
>  
> +	/*
> +	 * Ensure that all guest tagged cache entries are flushed before
> +	 * releasing the pages back to the system for use. CLFLUSH will
> +	 * not do this, so issue a WBINVD.
> +	 */
> +	wbinvd_on_all_cpus();
> +
>  	/*
>  	 * if userspace was terminated before unregistering the memory regions
>  	 * then lets unpin all the registered memory.
> @@ -7247,6 +7246,13 @@ static int svm_unregister_enc_region(struct kvm *kvm,
>  		goto failed;
>  	}
>  
> +	/*
> +	 * Ensure that all guest tagged cache entries are flushed before
> +	 * releasing the pages back to the system for use. CLFLUSH will
> +	 * not do this, so issue a WBINVD.
> +	 */
> +	wbinvd_on_all_cpus();
> +
>  	__unregister_enc_region_locked(kvm, region);
>  
>  	mutex_unlock(&kvm->lock);
> -- 
> 2.17.1
> 
> 


* Re: [PATCH] KVM: SVM: Issue WBINVD after deactivating an SEV guest
  2020-03-20 20:34 ` David Rientjes
@ 2020-03-20 20:37   ` Tom Lendacky
  2020-03-21  9:00     ` Greg KH
  0 siblings, 1 reply; 7+ messages in thread
From: Tom Lendacky @ 2020-03-20 20:37 UTC (permalink / raw)
  To: David Rientjes
  Cc: kvm, linux-kernel, Paolo Bonzini, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Brijesh Singh

On 3/20/20 3:34 PM, David Rientjes wrote:
> On Fri, 20 Mar 2020, Tom Lendacky wrote:
> 
>> Currently, CLFLUSH is used to flush SEV guest memory before the guest is
>> terminated (or a memory hotplug region is removed). However, CLFLUSH is
>> not enough to ensure that SEV guest tagged data is flushed from the cache.
>>
>> With 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations"), the
>> original WBINVD was removed. This then exposed crashes at random times
>> because of a cache flush race with a page that had both a hypervisor and
>> a guest tag in the cache.
>>
>> Restore the WBINVD when destroying an SEV guest and add a WBINVD to the
>> svm_unregister_enc_region() function to ensure hotplug memory is flushed
>> when removed. The DF_FLUSH can still be avoided at this point.
>>
>> Fixes: 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations")
>> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> 
> Acked-by: David Rientjes <rientjes@google.com>
> 
> Should this be marked for stable?

The Fixes tag should take care of that.

Thanks,
Tom

> 
> Cc: stable@vger.kernel.org # 5.5+
> 
>> ---
>>   arch/x86/kvm/svm.c | 22 ++++++++++++++--------
>>   1 file changed, 14 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>> index 08568ae9f7a1..d54cdca9c140 100644
>> --- a/arch/x86/kvm/svm.c
>> +++ b/arch/x86/kvm/svm.c
>> @@ -1980,14 +1980,6 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
>>   static void __unregister_enc_region_locked(struct kvm *kvm,
>>   					   struct enc_region *region)
>>   {
>> -	/*
>> -	 * The guest may change the memory encryption attribute from C=0 -> C=1
>> -	 * or vice versa for this memory range. Lets make sure caches are
>> -	 * flushed to ensure that guest data gets written into memory with
>> -	 * correct C-bit.
>> -	 */
>> -	sev_clflush_pages(region->pages, region->npages);
>> -
>>   	sev_unpin_memory(kvm, region->pages, region->npages);
>>   	list_del(&region->list);
>>   	kfree(region);
>> @@ -2004,6 +1996,13 @@ static void sev_vm_destroy(struct kvm *kvm)
>>   
>>   	mutex_lock(&kvm->lock);
>>   
>> +	/*
>> +	 * Ensure that all guest tagged cache entries are flushed before
>> +	 * releasing the pages back to the system for use. CLFLUSH will
>> +	 * not do this, so issue a WBINVD.
>> +	 */
>> +	wbinvd_on_all_cpus();
>> +
>>   	/*
>>   	 * if userspace was terminated before unregistering the memory regions
>>   	 * then lets unpin all the registered memory.
>> @@ -7247,6 +7246,13 @@ static int svm_unregister_enc_region(struct kvm *kvm,
>>   		goto failed;
>>   	}
>>   
>> +	/*
>> +	 * Ensure that all guest tagged cache entries are flushed before
>> +	 * releasing the pages back to the system for use. CLFLUSH will
>> +	 * not do this, so issue a WBINVD.
>> +	 */
>> +	wbinvd_on_all_cpus();
>> +
>>   	__unregister_enc_region_locked(kvm, region);
>>   
>>   	mutex_unlock(&kvm->lock);
>> -- 
>> 2.17.1
>>
>>


* Re: [PATCH] KVM: SVM: Issue WBINVD after deactivating an SEV guest
  2020-03-20 20:37   ` Tom Lendacky
@ 2020-03-21  9:00     ` Greg KH
  2020-03-21 12:16       ` Tom Lendacky
  0 siblings, 1 reply; 7+ messages in thread
From: Greg KH @ 2020-03-21  9:00 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: David Rientjes, kvm, linux-kernel, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Brijesh Singh

On Fri, Mar 20, 2020 at 03:37:23PM -0500, Tom Lendacky wrote:
> On 3/20/20 3:34 PM, David Rientjes wrote:
> > On Fri, 20 Mar 2020, Tom Lendacky wrote:
> > 
> > > Currently, CLFLUSH is used to flush SEV guest memory before the guest is
> > > terminated (or a memory hotplug region is removed). However, CLFLUSH is
> > > not enough to ensure that SEV guest tagged data is flushed from the cache.
> > > 
> > > With 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations"), the
> > > original WBINVD was removed. This then exposed crashes at random times
> > > because of a cache flush race with a page that had both a hypervisor and
> > > a guest tag in the cache.
> > > 
> > > Restore the WBINVD when destroying an SEV guest and add a WBINVD to the
> > > svm_unregister_enc_region() function to ensure hotplug memory is flushed
> > > when removed. The DF_FLUSH can still be avoided at this point.
> > > 
> > > Fixes: 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations")
> > > Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> > 
> > Acked-by: David Rientjes <rientjes@google.com>
> > 
> > Should this be marked for stable?
> 
> The Fixes tag should take care of that.

No it does not.
Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

Yes, I have had to go around and clean up after maintainers who don't
seem to realize this, but for KVM patches I have been explicitly told to
NOT take any patch unless it has a cc: stable on it, due to issues that
have happened in the past.

So for this subsystem, what you suggested guaranteed it would NOT get
picked up, please do not do that.

greg k-h


* Re: [PATCH] KVM: SVM: Issue WBINVD after deactivating an SEV guest
  2020-03-21  9:00     ` Greg KH
@ 2020-03-21 12:16       ` Tom Lendacky
  2020-03-21 17:06         ` Paolo Bonzini
  0 siblings, 1 reply; 7+ messages in thread
From: Tom Lendacky @ 2020-03-21 12:16 UTC (permalink / raw)
  To: Greg KH
  Cc: David Rientjes, kvm, linux-kernel, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Brijesh Singh

On 3/21/20 4:00 AM, Greg KH wrote:
> On Fri, Mar 20, 2020 at 03:37:23PM -0500, Tom Lendacky wrote:
>> On 3/20/20 3:34 PM, David Rientjes wrote:
>>> On Fri, 20 Mar 2020, Tom Lendacky wrote:
>>>
>>>> Currently, CLFLUSH is used to flush SEV guest memory before the guest is
>>>> terminated (or a memory hotplug region is removed). However, CLFLUSH is
>>>> not enough to ensure that SEV guest tagged data is flushed from the cache.
>>>>
>>>> With 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations"), the
>>>> original WBINVD was removed. This then exposed crashes at random times
>>>> because of a cache flush race with a page that had both a hypervisor and
>>>> a guest tag in the cache.
>>>>
>>>> Restore the WBINVD when destroying an SEV guest and add a WBINVD to the
>>>> svm_unregister_enc_region() function to ensure hotplug memory is flushed
>>>> when removed. The DF_FLUSH can still be avoided at this point.
>>>>
>>>> Fixes: 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations")
>>>> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
>>>
>>> Acked-by: David Rientjes <rientjes@google.com>
>>>
>>> Should this be marked for stable?
>>
>> The Fixes tag should take care of that.
> 
> No it does not.
> Please read:
>      https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
> for how to do this properly.
> 
> Yes, I have had to go around and clean up after maintainers who don't
> seem to realize this, but for KVM patches I have been explicitly told to
> NOT take any patch unless it has a cc: stable on it, due to issues that
> have happened in the past.
> 
> So for this subsystem, what you suggested guaranteed it would NOT get
> picked up, please do not do that.

Thanks for clarifying that, Greg.

Then, yes, it should have the Cc: to stable that David mentioned. If it
gets applied without that, I'll follow the process to send an email to
stable to get it included in 5.5-stable.

Thanks,
Tom

> 
> greg k-h
> 


* Re: [PATCH] KVM: SVM: Issue WBINVD after deactivating an SEV guest
  2020-03-21 12:16       ` Tom Lendacky
@ 2020-03-21 17:06         ` Paolo Bonzini
  0 siblings, 0 replies; 7+ messages in thread
From: Paolo Bonzini @ 2020-03-21 17:06 UTC (permalink / raw)
  To: Tom Lendacky, Greg KH
  Cc: David Rientjes, kvm, linux-kernel, Sean Christopherson,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Brijesh Singh

On 21/03/20 13:16, Tom Lendacky wrote:
>>
>> Yes, I have had to go around and clean up after maintainers who don't
>> seem to realize this, but for KVM patches I have been explicitly told to
>> NOT take any patch unless it has a cc: stable on it, due to issues that
>> have happened in the past.
>>
>> So for this subsystem, what you suggested guaranteed it would NOT get
>> picked up, please do not do that.
> 
> Thanks for clarifying that, Greg.
> 
> Then, yes, it should have the Cc: to stable that David mentioned. If it
> gets applied without that, I'll follow the process to send an email to
> stable to get it included in 5.5-stable.

No, don't worry, I do add the stable tags myself based on the Fixes tags.

Paolo


