* [PATCH V2] Fix unsynchronized access to sev members through svm_register_enc_region
From: Peter Gonda @ 2021-01-27 16:15 UTC
  To: kvm
  Cc: Peter Gonda, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Joerg Roedel, Tom Lendacky, Brijesh Singh,
	Sean Christopherson, x86, stable, linux-kernel

Grab kvm->lock before pinning memory when registering an encrypted
region; sev_pin_memory() relies on kvm->lock being held to ensure
correctness when checking and updating the number of pinned pages.

Add a lockdep assertion to help prevent future regressions.
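
For context, a simplified sketch of the check-and-update in
sev_pin_memory() that kvm->lock must serialize (paraphrased, not the
exact source): without the lock, two concurrent ioctls can both pass
the limit check against the same sev->pages_locked value and overshoot
RLIMIT_MEMLOCK.

	lockdep_assert_held(&kvm->lock);

	locked = sev->pages_locked + npages;
	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
	if (locked > lock_limit && !capable(CAP_IPC_LOCK))
		return ERR_PTR(-ENOMEM);

	/* ... pin the pages ... */

	sev->pages_locked = locked;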

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
Signed-off-by: Peter Gonda <pgonda@google.com>

V2
 - Fix up patch description
 - Correct file paths svm.c -> sev.c
 - Add unlock of kvm->lock on sev_pin_memory error

V1
 - https://lore.kernel.org/kvm/20210126185431.1824530-1-pgonda@google.com/

---
 arch/x86/kvm/svm/sev.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c8ffdbc81709..b80e9bf0a31b 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -342,6 +342,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 	unsigned long first, last;
 	int ret;
 
+	lockdep_assert_held(&kvm->lock);
+
 	if (ulen == 0 || uaddr + ulen < uaddr)
 		return ERR_PTR(-EINVAL);
 
@@ -1119,12 +1121,20 @@ int svm_register_enc_region(struct kvm *kvm,
 	if (!region)
 		return -ENOMEM;
 
+	mutex_lock(&kvm->lock);
 	region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
 	if (IS_ERR(region->pages)) {
 		ret = PTR_ERR(region->pages);
+		mutex_unlock(&kvm->lock);
 		goto e_free;
 	}
 
+	region->uaddr = range->addr;
+	region->size = range->size;
+
+	list_add_tail(&region->list, &sev->regions_list);
+	mutex_unlock(&kvm->lock);
+
 	/*
 	 * The guest may change the memory encryption attribute from C=0 -> C=1
 	 * or vice versa for this memory range. Lets make sure caches are
@@ -1133,13 +1143,6 @@ int svm_register_enc_region(struct kvm *kvm,
 	 */
 	sev_clflush_pages(region->pages, region->npages);
 
-	region->uaddr = range->addr;
-	region->size = range->size;
-
-	mutex_lock(&kvm->lock);
-	list_add_tail(&region->list, &sev->regions_list);
-	mutex_unlock(&kvm->lock);
-
 	return ret;
 
 e_free:
-- 
2.30.0.280.ga3ce27912f-goog



* Re: [PATCH V2] Fix unsynchronized access to sev members through svm_register_enc_region
From: Sean Christopherson @ 2021-01-27 21:54 UTC
  To: Peter Gonda
  Cc: kvm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Paolo Bonzini,
	Joerg Roedel, Tom Lendacky, Brijesh Singh, x86, stable,
	linux-kernel

On Wed, Jan 27, 2021, Peter Gonda wrote:
> Grab kvm->lock before pinning memory when registering an encrypted
> region; sev_pin_memory() relies on kvm->lock being held to ensure
> correctness when checking and updating the number of pinned pages.
> 
> Add a lockdep assertion to help prevent future regressions.
> 
> [...]
> Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
> Signed-off-by: Peter Gonda <pgonda@google.com>
> 
> V2
>  - Fix up patch description
>  - Correct file paths svm.c -> sev.c
>  - Add unlock of kvm->lock on sev_pin_memory error
> 
> V1
>  - https://lore.kernel.org/kvm/20210126185431.1824530-1-pgonda@google.com/

Put version info, and anything else that shouldn't be in the final commit, below
the three dashes.  AFAIK that requires manually editing the patch file before
sending it.

> 
> ---

Version info goes here.
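
For reference, "git am" discards everything between the "---" and the
diffstat when applying, so a layout like this keeps the changelog out
of the final commit (actual content elided):

    <commit message>

    Signed-off-by: Peter Gonda <pgonda@google.com>
    ---
    V2:
     - changelog notes that should not land in git history

     arch/x86/kvm/svm/sev.c | 17 ++++++++++-------
     1 file changed, 10 insertions(+), 7 deletions(-)

    diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
    ...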

>  arch/x86/kvm/svm/sev.c | 17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index c8ffdbc81709..b80e9bf0a31b 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -342,6 +342,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
>  	unsigned long first, last;
>  	int ret;
>  
> +	lockdep_assert_held(&kvm->lock);
> +
>  	if (ulen == 0 || uaddr + ulen < uaddr)
>  		return ERR_PTR(-EINVAL);
>  
> @@ -1119,12 +1121,20 @@ int svm_register_enc_region(struct kvm *kvm,
>  	if (!region)
>  		return -ENOMEM;
>  
> +	mutex_lock(&kvm->lock);
>  	region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
>  	if (IS_ERR(region->pages)) {
>  		ret = PTR_ERR(region->pages);
> +		mutex_unlock(&kvm->lock);
>  		goto e_free;
>  	}
>  
> +	region->uaddr = range->addr;
> +	region->size = range->size;
> +
> +	list_add_tail(&region->list, &sev->regions_list);
> +	mutex_unlock(&kvm->lock);
> +
>  	/*
>  	 * The guest may change the memory encryption attribute from C=0 -> C=1
>  	 * or vice versa for this memory range. Lets make sure caches are
> @@ -1133,13 +1143,6 @@ int svm_register_enc_region(struct kvm *kvm,
>  	 */
>  	sev_clflush_pages(region->pages, region->npages);

I don't think it actually matters, but it feels like the flush should be done
before adding the region to the list.  That would also make this sequence
consistent with the other flows.
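
Something like the following, as a sketch only (error handling elided;
dropping and re-taking kvm->lock keeps the expensive flush itself
outside the lock):

	mutex_lock(&kvm->lock);
	region->pages = sev_pin_memory(kvm, range->addr, range->size,
				       &region->npages, 1);
	mutex_unlock(&kvm->lock);

	/* Flush before the region becomes visible on the list. */
	sev_clflush_pages(region->pages, region->npages);

	mutex_lock(&kvm->lock);
	list_add_tail(&region->list, &sev->regions_list);
	mutex_unlock(&kvm->lock);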

Tom, any thoughts?



* Re: [PATCH V2] Fix unsynchronized access to sev members through svm_register_enc_region
From: Tom Lendacky @ 2021-01-27 22:51 UTC
  To: Sean Christopherson, Peter Gonda
  Cc: kvm, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Paolo Bonzini,
	Joerg Roedel, Brijesh Singh, x86, stable, linux-kernel

On 1/27/21 3:54 PM, Sean Christopherson wrote:
> On Wed, Jan 27, 2021, Peter Gonda wrote:
>> Grab kvm->lock before pinning memory when registering an encrypted
>> region; sev_pin_memory() relies on kvm->lock being held to ensure
>> correctness when checking and updating the number of pinned pages.
>>
...
>> +
>> +	list_add_tail(&region->list, &sev->regions_list);
>> +	mutex_unlock(&kvm->lock);
>> +
>>   	/*
>>   	 * The guest may change the memory encryption attribute from C=0 -> C=1
>>   	 * or vice versa for this memory range. Lets make sure caches are
>> @@ -1133,13 +1143,6 @@ int svm_register_enc_region(struct kvm *kvm,
>>   	 */
>>   	sev_clflush_pages(region->pages, region->npages);
> 
> I don't think it actually matters, but it feels like the flush should be done
> before adding the region to the list.  That would also make this sequence
> consistent with the other flows.
> 
> Tom, any thoughts?

I don't think it matters, either. This does keep the flushing outside of 
the mutex, so if you are doing parallel operations, that should help speed 
things up a bit.

Thanks,
Tom



* Re: [PATCH V2] Fix unsynchronized access to sev members through svm_register_enc_region
From: Paolo Bonzini @ 2021-01-28 10:15 UTC
  To: Peter Gonda, kvm
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Joerg Roedel,
	Tom Lendacky, Brijesh Singh, Sean Christopherson, x86, stable,
	linux-kernel

On 27/01/21 17:15, Peter Gonda wrote:
> Grab kvm->lock before pinning memory when registering an encrypted
> region; sev_pin_memory() relies on kvm->lock being held to ensure
> correctness when checking and updating the number of pinned pages.
> 
> Add a lockdep assertion to help prevent future regressions.
> 
> [...]

Queued, thanks.

Paolo


