linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take number of pages
@ 2019-01-22 21:17 Thiago Jung Bauermann
  2019-01-22 21:17 ` [PATCH 2/2] x86/kvmclock: set_memory_decrypted() takes " Thiago Jung Bauermann
  2019-01-22 21:40 ` [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take " Lendacky, Thomas
  0 siblings, 2 replies; 5+ messages in thread
From: Thiago Jung Bauermann @ 2019-01-22 21:17 UTC (permalink / raw)
  To: x86
  Cc: kvm, iommu, linux-kernel, Christoph Hellwig, Marek Szyprowski,
	Robin Murphy, Paolo Bonzini, Radim Krčmář,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Tom Lendacky, Ram Pai, Thiago Jung Bauermann

From: Ram Pai <linuxram@us.ibm.com>

set_memory_encrypted() and set_memory_decrypted() expect the number of
PAGE_SIZE pages to encrypt or decrypt. dma_direct_alloc() and
dma_direct_free() instead pass the number of bytes. This encrypts/decrypts
a huge number of pages, resulting in data corruption.

Fixed it.

[ bauermann: Slightly reworded commit message and added Fixes: tag. ]
Fixes: d7b417fa08d1 ("x86/mm: Add DMA support for SEV memory encryption")
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 kernel/dma/direct.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

Notes:

1. This was tested on powerpc with patches adding support for running
   under the ultravisor, which are not yet upstream.

2. The lines changed in this patch were added by commit c10f07aa27da
   ("dma/direct: Handle force decryption for DMA coherent buffers in
   common code"), but it only moves the code from an x86-specific file.
   Therefore the Fixes tag references the commit that first introduced
   the code.

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 355d16acee6d..bc78c37220ba 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -166,7 +166,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	ret = page_address(page);
 	if (force_dma_unencrypted()) {
-		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
+		set_memory_decrypted((unsigned long)ret, 1);
 		*dma_handle = __phys_to_dma(dev, page_to_phys(page));
 	} else {
 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
@@ -186,10 +186,8 @@ void __dma_direct_free_pages(struct device *dev, size_t size, struct page *page)
 void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
 		dma_addr_t dma_addr, unsigned long attrs)
 {
-	unsigned int page_order = get_order(size);
-
 	if (force_dma_unencrypted())
-		set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);
+		set_memory_encrypted((unsigned long)cpu_addr, 1);
 	__dma_direct_free_pages(dev, size, virt_to_page(cpu_addr));
 }
 



* [PATCH 2/2] x86/kvmclock: set_memory_decrypted() takes number of pages
  2019-01-22 21:17 [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take number of pages Thiago Jung Bauermann
@ 2019-01-22 21:17 ` Thiago Jung Bauermann
  2019-01-22 21:43   ` Lendacky, Thomas
  2019-01-22 21:40 ` [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take " Lendacky, Thomas
  1 sibling, 1 reply; 5+ messages in thread
From: Thiago Jung Bauermann @ 2019-01-22 21:17 UTC (permalink / raw)
  To: x86
  Cc: kvm, iommu, linux-kernel, Christoph Hellwig, Marek Szyprowski,
	Robin Murphy, Paolo Bonzini, Radim Krčmář,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Tom Lendacky, Ram Pai, Thiago Jung Bauermann

From: Ram Pai <linuxram@us.ibm.com>

set_memory_decrypted() expects the number of PAGE_SIZE pages to decrypt.
kvmclock_init_mem() instead passes the number of bytes. This decrypts a
huge number of pages, resulting in data corruption.

Fixed it.

[ bauermann: Slightly reworded commit message and added Fixes: tag. ]
Fixes: 6a1cac56f41f ("x86/kvm: Use __bss_decrypted attribute in shared variables")
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/x86/kernel/kvmclock.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Note: Found by code inspection. I don't have a way to test.

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index e811d4d1c824..b5c867dd2c8d 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -251,8 +251,7 @@ static void __init kvmclock_init_mem(void)
 	 * be mapped decrypted.
 	 */
 	if (sev_active()) {
-		r = set_memory_decrypted((unsigned long) hvclock_mem,
-					 1UL << order);
+		r = set_memory_decrypted((unsigned long) hvclock_mem, 1);
 		if (r) {
 			__free_pages(p, order);
 			hvclock_mem = NULL;



* Re: [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take number of pages
  2019-01-22 21:17 [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take number of pages Thiago Jung Bauermann
  2019-01-22 21:17 ` [PATCH 2/2] x86/kvmclock: set_memory_decrypted() takes " Thiago Jung Bauermann
@ 2019-01-22 21:40 ` Lendacky, Thomas
  2019-01-22 22:38   ` Thiago Jung Bauermann
  1 sibling, 1 reply; 5+ messages in thread
From: Lendacky, Thomas @ 2019-01-22 21:40 UTC (permalink / raw)
  To: Thiago Jung Bauermann, x86
  Cc: kvm, iommu, linux-kernel, Christoph Hellwig, Marek Szyprowski,
	Robin Murphy, Paolo Bonzini, Radim Krčmář,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Ram Pai

On 1/22/19 3:17 PM, Thiago Jung Bauermann wrote:
> From: Ram Pai <linuxram@us.ibm.com>
> 
> set_memory_encrypted() and set_memory_decrypted() expect the number of
> PAGE_SIZE pages to encrypt or decrypt. dma_direct_alloc() and
> dma_direct_free() instead pass the number of bytes. This encrypts/decrypts
> a huge number of pages, resulting in data corruption.

That is not what it is doing. See comments below.

> 
> Fixed it.
> 
> [ bauermann: Slightly reworded commit message and added Fixes: tag. ]
> Fixes: d7b417fa08d1 ("x86/mm: Add DMA support for SEV memory encryption")
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> ---
>  kernel/dma/direct.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> Notes:
> 
> 1. This was tested on powerpc with patches adding support for running
>    under the ultravisor, which are not yet upstream.
> 
> 2. The lines changed in this patch were added by commit c10f07aa27da
>    ("dma/direct: Handle force decryption for DMA coherent buffers in
>    common code"), but it only moves the code from an x86-specific file.
>    Therefore the Fixes tag references the commit that first introduced
>    the code.
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 355d16acee6d..bc78c37220ba 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -166,7 +166,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
>  
>  	ret = page_address(page);
>  	if (force_dma_unencrypted()) {
> -		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
> +		set_memory_decrypted((unsigned long)ret, 1);

The get_order() function will return the order for the specified size. To
then get the number of pages you perform the shift as is being done. The
change is definitely wrong since you are now hardcoding the page count to
1. The call to __dma_direct_alloc_pages() will allocate more than one page
if the size is greater than a page.

>  		*dma_handle = __phys_to_dma(dev, page_to_phys(page));
>  	} else {
>  		*dma_handle = phys_to_dma(dev, page_to_phys(page));
> @@ -186,10 +186,8 @@ void __dma_direct_free_pages(struct device *dev, size_t size, struct page *page)
>  void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
>  		dma_addr_t dma_addr, unsigned long attrs)
>  {
> -	unsigned int page_order = get_order(size);
> -
>  	if (force_dma_unencrypted())
> -		set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);
> +		set_memory_encrypted((unsigned long)cpu_addr, 1);

Same comment here as above.

Thanks,
Tom

>  	__dma_direct_free_pages(dev, size, virt_to_page(cpu_addr));
>  }
>  
> 


* Re: [PATCH 2/2] x86/kvmclock: set_memory_decrypted() takes number of pages
  2019-01-22 21:17 ` [PATCH 2/2] x86/kvmclock: set_memory_decrypted() takes " Thiago Jung Bauermann
@ 2019-01-22 21:43   ` Lendacky, Thomas
  0 siblings, 0 replies; 5+ messages in thread
From: Lendacky, Thomas @ 2019-01-22 21:43 UTC (permalink / raw)
  To: Thiago Jung Bauermann, x86
  Cc: kvm, iommu, linux-kernel, Christoph Hellwig, Marek Szyprowski,
	Robin Murphy, Paolo Bonzini, Radim Krčmář,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Ram Pai

On 1/22/19 3:17 PM, Thiago Jung Bauermann wrote:
> From: Ram Pai <linuxram@us.ibm.com>
> 
> set_memory_decrypted() expects the number of PAGE_SIZE pages to decrypt.
> kvmclock_init_mem() instead passes the number of bytes. This decrypts a
> huge number of pages, resulting in data corruption.

Same comment as patch 1/2 in this series.  This is not correct. See
comments below.

> 
> Fixed it.
> 
> [ bauermann: Slightly reworded commit message and added Fixes: tag. ]
> Fixes: 6a1cac56f41f ("x86/kvm: Use __bss_decrypted attribute in shared variables")
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> ---
>  arch/x86/kernel/kvmclock.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> Note: Found by code inspection. I don't have a way to test.
> 
> diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
> index e811d4d1c824..b5c867dd2c8d 100644
> --- a/arch/x86/kernel/kvmclock.c
> +++ b/arch/x86/kernel/kvmclock.c
> @@ -251,8 +251,7 @@ static void __init kvmclock_init_mem(void)
>  	 * be mapped decrypted.
>  	 */
>  	if (sev_active()) {
> -		r = set_memory_decrypted((unsigned long) hvclock_mem,
> -					 1UL << order);
> +		r = set_memory_decrypted((unsigned long) hvclock_mem, 1);

Again, not correct. A number of pages were allocated based on the order.
That number is calculated using the shift in the call. Hardcoding this to
1 is wrong.

Thanks,
Tom

>  		if (r) {
>  			__free_pages(p, order);
>  			hvclock_mem = NULL;
> 


* Re: [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take number of pages
  2019-01-22 21:40 ` [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take " Lendacky, Thomas
@ 2019-01-22 22:38   ` Thiago Jung Bauermann
  0 siblings, 0 replies; 5+ messages in thread
From: Thiago Jung Bauermann @ 2019-01-22 22:38 UTC (permalink / raw)
  To: Lendacky, Thomas
  Cc: x86, kvm, iommu, linux-kernel, Christoph Hellwig,
	Marek Szyprowski, Robin Murphy, Paolo Bonzini,
	Radim Krčmář,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Ram Pai


Lendacky, Thomas <Thomas.Lendacky@amd.com> writes:

> On 1/22/19 3:17 PM, Thiago Jung Bauermann wrote:
>> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
>> index 355d16acee6d..bc78c37220ba 100644
>> --- a/kernel/dma/direct.c
>> +++ b/kernel/dma/direct.c
>> @@ -166,7 +166,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
>>  
>>  	ret = page_address(page);
>>  	if (force_dma_unencrypted()) {
>> -		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
>> +		set_memory_decrypted((unsigned long)ret, 1);
>
> The get_order() function will return the order for the specified size. To
> then get the number of pages you perform the shift as is being done. The
> change is definitely wrong since you are now hardcoding the page count to
> 1. The call to __dma_direct_alloc_pages() will allocate more than one page
> if the size is greater than a page.

<facepalm>

You are correct, of course. Sorry for the noise and thanks for explaining.

-- 
Thiago Jung Bauermann
IBM Linux Technology Center



end of thread, other threads:[~2019-01-22 22:39 UTC | newest]

