* [PATCH for 5.4] Fix unsynchronized access to sev members through svm_register_enc_region
@ 2021-02-08 16:48 Peter Gonda
  2021-02-08 16:54 ` Paolo Bonzini
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Peter Gonda @ 2021-02-08 16:48 UTC (permalink / raw)
  To: stable
  Cc: Peter Gonda, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Joerg Roedel, Tom Lendacky, Brijesh Singh,
	Sean Christopherson, x86, kvm, linux-kernel

commit 19a23da53932bc8011220bd8c410cb76012de004 upstream.

Grab kvm->lock before pinning memory when registering an encrypted
region; sev_pin_memory() relies on kvm->lock being held to ensure
correctness when checking and updating the number of pinned pages.

Add a lockdep assertion to help prevent future regressions.
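
For context, the race being closed is an unserialized read-modify-write of the
per-VM pinned-page accounting inside sev_pin_memory(). In abridged form (field
and helper names below follow the 5.4 sources; surrounding code trimmed purely
for illustration):

        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
        unsigned long locked, lock_limit;

        locked = sev->pages_locked + npages;            /* read current total */
        lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
        if (locked > lock_limit && !capable(CAP_IPC_LOCK))
                return NULL;                            /* check against RLIMIT_MEMLOCK */
        ...
        sev->pages_locked = locked;                     /* write the new total back */

Without kvm->lock held across that whole sequence, two concurrent
KVM_MEMORY_ENCRYPT_REG_REGION calls can both pass the limit check and then
overwrite each other's pages_locked update.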

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
Signed-off-by: Peter Gonda <pgonda@google.com>

V2
 - Fix up patch description
 - Correct file paths svm.c -> sev.c
 - Add unlock of kvm->lock on sev_pin_memory error

V1
 - https://lore.kernel.org/kvm/20210126185431.1824530-1-pgonda@google.com/

Message-Id: <20210127161524.2832400-1-pgonda@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 2b506904be02..93c89f1ffc5d 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1830,6 +1830,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 	struct page **pages;
 	unsigned long first, last;
 
+	lockdep_assert_held(&kvm->lock);
+
 	if (ulen == 0 || uaddr + ulen < uaddr)
 		return NULL;
 
@@ -7086,12 +7088,21 @@ static int svm_register_enc_region(struct kvm *kvm,
 	if (!region)
 		return -ENOMEM;
 
+	mutex_lock(&kvm->lock);
 	region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
 	if (!region->pages) {
 		ret = -ENOMEM;
+		mutex_unlock(&kvm->lock);
 		goto e_free;
 	}
 
+	region->uaddr = range->addr;
+	region->size = range->size;
+
+	mutex_lock(&kvm->lock);
+	list_add_tail(&region->list, &sev->regions_list);
+	mutex_unlock(&kvm->lock);
+
 	/*
 	 * The guest may change the memory encryption attribute from C=0 -> C=1
 	 * or vice versa for this memory range. Lets make sure caches are
@@ -7100,13 +7111,6 @@ static int svm_register_enc_region(struct kvm *kvm,
 	 */
 	sev_clflush_pages(region->pages, region->npages);
 
-	region->uaddr = range->addr;
-	region->size = range->size;
-
-	mutex_lock(&kvm->lock);
-	list_add_tail(&region->list, &sev->regions_list);
-	mutex_unlock(&kvm->lock);
-
 	return ret;
 
 e_free:
-- 
2.30.0.478.g8a0d178c01-goog



* Re: [PATCH for 5.4] Fix unsynchronized access to sev members through svm_register_enc_region
  2021-02-08 16:48 [PATCH for 5.4] Fix unsynchronized access to sev members through svm_register_enc_region Peter Gonda
@ 2021-02-08 16:54 ` Paolo Bonzini
  2021-02-11 14:19 ` Greg KH
  2021-02-17  9:18 ` Dov Murik
  2 siblings, 0 replies; 5+ messages in thread
From: Paolo Bonzini @ 2021-02-08 16:54 UTC (permalink / raw)
  To: Peter Gonda, stable
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Joerg Roedel,
	Tom Lendacky, Brijesh Singh, Sean Christopherson, x86, kvm,
	linux-kernel

On 08/02/21 17:48, Peter Gonda wrote:
> commit 19a23da53932bc8011220bd8c410cb76012de004 upstream.
> 
> Grab kvm->lock before pinning memory when registering an encrypted
> region; sev_pin_memory() relies on kvm->lock being held to ensure
> correctness when checking and updating the number of pinned pages.
> 
> Add a lockdep assertion to help prevent future regressions.
> 
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: stable@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
> Signed-off-by: Peter Gonda <pgonda@google.com>
> 
> V2
>   - Fix up patch description
>   - Correct file paths svm.c -> sev.c
>   - Add unlock of kvm->lock on sev_pin_memory error
> 
> V1
>   - https://lore.kernel.org/kvm/20210126185431.1824530-1-pgonda@google.com/
> 
> Message-Id: <20210127161524.2832400-1-pgonda@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>   arch/x86/kvm/svm.c | 18 +++++++++++-------
>   1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 2b506904be02..93c89f1ffc5d 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1830,6 +1830,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
>   	struct page **pages;
>   	unsigned long first, last;
>   
> +	lockdep_assert_held(&kvm->lock);
> +
>   	if (ulen == 0 || uaddr + ulen < uaddr)
>   		return NULL;
>   
> @@ -7086,12 +7088,21 @@ static int svm_register_enc_region(struct kvm *kvm,
>   	if (!region)
>   		return -ENOMEM;
>   
> +	mutex_lock(&kvm->lock);
>   	region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
>   	if (!region->pages) {
>   		ret = -ENOMEM;
> +		mutex_unlock(&kvm->lock);
>   		goto e_free;
>   	}
>   
> +	region->uaddr = range->addr;
> +	region->size = range->size;
> +
> +	mutex_lock(&kvm->lock);
> +	list_add_tail(&region->list, &sev->regions_list);
> +	mutex_unlock(&kvm->lock);
> +
>   	/*
>   	 * The guest may change the memory encryption attribute from C=0 -> C=1
>   	 * or vice versa for this memory range. Lets make sure caches are
> @@ -7100,13 +7111,6 @@ static int svm_register_enc_region(struct kvm *kvm,
>   	 */
>   	sev_clflush_pages(region->pages, region->npages);
>   
> -	region->uaddr = range->addr;
> -	region->size = range->size;
> -
> -	mutex_lock(&kvm->lock);
> -	list_add_tail(&region->list, &sev->regions_list);
> -	mutex_unlock(&kvm->lock);
> -
>   	return ret;
>   
>   e_free:
> 

Acked-by: Paolo Bonzini <pbonzini@redhat.com>



* Re: [PATCH for 5.4] Fix unsynchronized access to sev members through svm_register_enc_region
  2021-02-08 16:48 [PATCH for 5.4] Fix unsynchronized access to sev members through svm_register_enc_region Peter Gonda
  2021-02-08 16:54 ` Paolo Bonzini
@ 2021-02-11 14:19 ` Greg KH
  2021-02-17  9:18 ` Dov Murik
  2 siblings, 0 replies; 5+ messages in thread
From: Greg KH @ 2021-02-11 14:19 UTC (permalink / raw)
  To: Peter Gonda
  Cc: stable, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Joerg Roedel, Tom Lendacky, Brijesh Singh,
	Sean Christopherson, x86, kvm, linux-kernel

On Mon, Feb 08, 2021 at 08:48:55AM -0800, Peter Gonda wrote:
> commit 19a23da53932bc8011220bd8c410cb76012de004 upstream.
> 
> Grab kvm->lock before pinning memory when registering an encrypted
> region; sev_pin_memory() relies on kvm->lock being held to ensure
> correctness when checking and updating the number of pinned pages.
> 
> Add a lockdep assertion to help prevent future regressions.
> 
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: stable@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
> Signed-off-by: Peter Gonda <pgonda@google.com>
> 
> V2
>  - Fix up patch description
>  - Correct file paths svm.c -> sev.c
>  - Add unlock of kvm->lock on sev_pin_memory error
> 
> V1
>  - https://lore.kernel.org/kvm/20210126185431.1824530-1-pgonda@google.com/
> 
> Message-Id: <20210127161524.2832400-1-pgonda@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/svm.c | 18 +++++++++++-------
>  1 file changed, 11 insertions(+), 7 deletions(-)

Both backports now queued up, thanks.

greg k-h


* Re: [PATCH for 5.4] Fix unsynchronized access to sev members through svm_register_enc_region
  2021-02-08 16:48 [PATCH for 5.4] Fix unsynchronized access to sev members through svm_register_enc_region Peter Gonda
  2021-02-08 16:54 ` Paolo Bonzini
  2021-02-11 14:19 ` Greg KH
@ 2021-02-17  9:18 ` Dov Murik
  2021-02-17 12:39   ` Paolo Bonzini
  2 siblings, 1 reply; 5+ messages in thread
From: Dov Murik @ 2021-02-17  9:18 UTC (permalink / raw)
  To: Peter Gonda, stable
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Paolo Bonzini,
	Joerg Roedel, Tom Lendacky, Brijesh Singh, Sean Christopherson,
	x86, kvm, linux-kernel

Hi Peter,

On 08/02/2021 18:48, Peter Gonda wrote:
> commit 19a23da53932bc8011220bd8c410cb76012de004 upstream.
> 
> Grab kvm->lock before pinning memory when registering an encrypted
> region; sev_pin_memory() relies on kvm->lock being held to ensure
> correctness when checking and updating the number of pinned pages.
> 
> Add a lockdep assertion to help prevent future regressions.
> 
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: stable@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
> Signed-off-by: Peter Gonda <pgonda@google.com>
> 
> V2
>  - Fix up patch description
>  - Correct file paths svm.c -> sev.c
>  - Add unlock of kvm->lock on sev_pin_memory error
> 
> V1
>  - https://lore.kernel.org/kvm/20210126185431.1824530-1-pgonda@google.com/
> 
> Message-Id: <20210127161524.2832400-1-pgonda@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/svm.c | 18 +++++++++++-------
>  1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 2b506904be02..93c89f1ffc5d 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1830,6 +1830,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
>  	struct page **pages;
>  	unsigned long first, last;
> 
> +	lockdep_assert_held(&kvm->lock);
> +
>  	if (ulen == 0 || uaddr + ulen < uaddr)
>  		return NULL;
> 
> @@ -7086,12 +7088,21 @@ static int svm_register_enc_region(struct kvm *kvm,
>  	if (!region)
>  		return -ENOMEM;
> 
> +	mutex_lock(&kvm->lock);
>  	region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
>  	if (!region->pages) {
>  		ret = -ENOMEM;
> +		mutex_unlock(&kvm->lock);
>  		goto e_free;
>  	}
> 
> +	region->uaddr = range->addr;
> +	region->size = range->size;
> +
> +	mutex_lock(&kvm->lock);

This extra mutex_lock call doesn't appear in the upstream patch (committed 
as 19a23da5393), but does appear in the 5.4 and 4.19 backports.  Is it
needed here?

-Dov


> +	list_add_tail(&region->list, &sev->regions_list);
> +	mutex_unlock(&kvm->lock);
> +
>  	/*
>  	 * The guest may change the memory encryption attribute from C=0 -> C=1
>  	 * or vice versa for this memory range. Lets make sure caches are
> @@ -7100,13 +7111,6 @@ static int svm_register_enc_region(struct kvm *kvm,
>  	 */
>  	sev_clflush_pages(region->pages, region->npages);
> 
> -	region->uaddr = range->addr;
> -	region->size = range->size;
> -
> -	mutex_lock(&kvm->lock);
> -	list_add_tail(&region->list, &sev->regions_list);
> -	mutex_unlock(&kvm->lock);
> -
>  	return ret;
> 
>  e_free:
> 
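
Spelling out the success path of svm_register_enc_region() with this backport
applied (condensed from the diff above, comment block trimmed), the function
takes kvm->lock a second time while the first acquisition is still held:

        mutex_lock(&kvm->lock);
        region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
        if (!region->pages) {
                ret = -ENOMEM;
                mutex_unlock(&kvm->lock);
                goto e_free;
        }

        region->uaddr = range->addr;
        region->size = range->size;

        mutex_lock(&kvm->lock);         /* second acquisition while the first is still held */
        list_add_tail(&region->list, &sev->regions_list);
        mutex_unlock(&kvm->lock);       /* releases only one of the two acquisitions */

        sev_clflush_pages(region->pages, region->npages);
        return ret;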


* Re: [PATCH for 5.4] Fix unsynchronized access to sev members through svm_register_enc_region
  2021-02-17  9:18 ` Dov Murik
@ 2021-02-17 12:39   ` Paolo Bonzini
  0 siblings, 0 replies; 5+ messages in thread
From: Paolo Bonzini @ 2021-02-17 12:39 UTC (permalink / raw)
  To: Dov Murik, Peter Gonda, stable
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Joerg Roedel,
	Tom Lendacky, Brijesh Singh, Sean Christopherson, x86, kvm,
	linux-kernel

On 17/02/21 10:18, Dov Murik wrote:
> Hi Peter,
> 
> On 08/02/2021 18:48, Peter Gonda wrote:
>> commit 19a23da53932bc8011220bd8c410cb76012de004 upstream.
>>
>> Grab kvm->lock before pinning memory when registering an encrypted
>> region; sev_pin_memory() relies on kvm->lock being held to ensure
>> correctness when checking and updating the number of pinned pages.
>>
>> Add a lockdep assertion to help prevent future regressions.
>>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: "H. Peter Anvin" <hpa@zytor.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Joerg Roedel <joro@8bytes.org>
>> Cc: Tom Lendacky <thomas.lendacky@amd.com>
>> Cc: Brijesh Singh <brijesh.singh@amd.com>
>> Cc: Sean Christopherson <seanjc@google.com>
>> Cc: x86@kernel.org
>> Cc: kvm@vger.kernel.org
>> Cc: stable@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
>> Signed-off-by: Peter Gonda <pgonda@google.com>
>>
>> V2
>>   - Fix up patch description
>>   - Correct file paths svm.c -> sev.c
>>   - Add unlock of kvm->lock on sev_pin_memory error
>>
>> V1
>>   - https://lore.kernel.org/kvm/20210126185431.1824530-1-pgonda@google.com/
>>
>> Message-Id: <20210127161524.2832400-1-pgonda@google.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>   arch/x86/kvm/svm.c | 18 +++++++++++-------
>>   1 file changed, 11 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>> index 2b506904be02..93c89f1ffc5d 100644
>> --- a/arch/x86/kvm/svm.c
>> +++ b/arch/x86/kvm/svm.c
>> @@ -1830,6 +1830,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
>>   	struct page **pages;
>>   	unsigned long first, last;
>>
>> +	lockdep_assert_held(&kvm->lock);
>> +
>>   	if (ulen == 0 || uaddr + ulen < uaddr)
>>   		return NULL;
>>
>> @@ -7086,12 +7088,21 @@ static int svm_register_enc_region(struct kvm *kvm,
>>   	if (!region)
>>   		return -ENOMEM;
>>
>> +	mutex_lock(&kvm->lock);
>>   	region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
>>   	if (!region->pages) {
>>   		ret = -ENOMEM;
>> +		mutex_unlock(&kvm->lock);
>>   		goto e_free;
>>   	}
>>
>> +	region->uaddr = range->addr;
>> +	region->size = range->size;
>> +
>> +	mutex_lock(&kvm->lock);
> 
> This extra mutex_lock call doesn't appear in the upstream patch (committed
> as 19a23da5393), but does appear in the 5.4 and 4.19 backports.  Is it
> needed here?

Ouch.  No, it isn't, and it's an insta-deadlock.  Let me send a fix.

Paolo

> -Dov
> 
> 
>> +	list_add_tail(&region->list, &sev->regions_list);
>> +	mutex_unlock(&kvm->lock);
>> +
>>   	/*
>>   	 * The guest may change the memory encryption attribute from C=0 -> C=1
>>   	 * or vice versa for this memory range. Lets make sure caches are
>> @@ -7100,13 +7111,6 @@ static int svm_register_enc_region(struct kvm *kvm,
>>   	 */
>>   	sev_clflush_pages(region->pages, region->npages);
>>
>> -	region->uaddr = range->addr;
>> -	region->size = range->size;
>> -
>> -	mutex_lock(&kvm->lock);
>> -	list_add_tail(&region->list, &sev->regions_list);
>> -	mutex_unlock(&kvm->lock);
>> -
>>   	return ret;
>>
>>   e_free:
>>
> 
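
For reference, kernel mutexes are not recursive, so the second mutex_lock()
above is taken on a lock the task already holds and never returns. The minimal
way out is to drop the duplicated lock/unlock pair and release kvm->lock once
on each path, along these lines (a sketch of the idea only, not necessarily
the exact follow-up fix):

        mutex_lock(&kvm->lock);
        region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
        if (!region->pages) {
                ret = -ENOMEM;
                mutex_unlock(&kvm->lock);
                goto e_free;
        }

        region->uaddr = range->addr;
        region->size = range->size;
        list_add_tail(&region->list, &sev->regions_list);
        mutex_unlock(&kvm->lock);

        /* flush caches after registration, as the comment in the patch explains */
        sev_clflush_pages(region->pages, region->npages);

        return ret;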

