linux-kernel.vger.kernel.org archive mirror
* [patch for-5.8 0/4] dma-direct: dma_direct_alloc_pages() fixes for AMD SEV
@ 2020-06-11 19:20 David Rientjes
  2020-06-11 19:20 ` [patch for-5.8 1/4] dma-direct: always align allocation size in dma_direct_alloc_pages() David Rientjes
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: David Rientjes @ 2020-06-11 19:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Thomas Lendacky, Brijesh Singh, Marek Szyprowski, Robin Murphy,
	iommu, linux-kernel

While debugging recently reported issues concerning DMA allocation
practices when CONFIG_AMD_MEM_ENCRYPT is enabled, some curiosities arose
when looking at dma_direct_alloc_pages() behavior.

Fix these up.  These are all likely stable material, so I'm proposing them
for 5.8.
---
 kernel/dma/direct.c | 42 ++++++++++++++++++++++++++++++++----------
 1 file changed, 32 insertions(+), 10 deletions(-)

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [patch for-5.8 1/4] dma-direct: always align allocation size in dma_direct_alloc_pages()
  2020-06-11 19:20 [patch for-5.8 0/4] dma-direct: dma_direct_alloc_pages() fixes for AMD SEV David Rientjes
@ 2020-06-11 19:20 ` David Rientjes
  2020-06-15  6:54   ` Christoph Hellwig
  2020-06-11 19:20 ` [patch for-5.8 2/4] dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails David Rientjes
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: David Rientjes @ 2020-06-11 19:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Thomas Lendacky, Brijesh Singh, Marek Szyprowski, Robin Murphy,
	iommu, linux-kernel

dma_alloc_contiguous() does size >> PAGE_SHIFT and set_memory_decrypted()
works at page granularity.  It's necessary to page align the allocation
size in dma_direct_alloc_pages() for consistent behavior.

This also fixes an issue when arch_dma_prep_coherent() is called on an
unaligned allocation size for dma_alloc_need_uncached() when
CONFIG_DMA_DIRECT_REMAP is disabled but CONFIG_ARCH_HAS_DMA_SET_UNCACHED
is enabled.

Cc: stable@vger.kernel.org
Signed-off-by: David Rientjes <rientjes@google.com>
---
 kernel/dma/direct.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -112,11 +112,12 @@ static inline bool dma_should_free_from_pool(struct device *dev,
 struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp, unsigned long attrs)
 {
-	size_t alloc_size = PAGE_ALIGN(size);
 	int node = dev_to_node(dev);
 	struct page *page = NULL;
 	u64 phys_limit;
 
+	VM_BUG_ON(!PAGE_ALIGNED(size));
+
 	if (attrs & DMA_ATTR_NO_WARN)
 		gfp |= __GFP_NOWARN;
 
@@ -124,14 +125,14 @@ struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 	gfp &= ~__GFP_ZERO;
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
-	page = dma_alloc_contiguous(dev, alloc_size, gfp);
+	page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
-		dma_free_contiguous(dev, page, alloc_size);
+		dma_free_contiguous(dev, page, size);
 		page = NULL;
 	}
 again:
 	if (!page)
-		page = alloc_pages_node(node, gfp, get_order(alloc_size));
+		page = alloc_pages_node(node, gfp, get_order(size));
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
@@ -158,8 +159,10 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	struct page *page;
 	void *ret;
 
+	size = PAGE_ALIGN(size);
+
 	if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
-		ret = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page, gfp);
+		ret = dma_alloc_from_pool(dev, size, &page, gfp);
 		if (!ret)
 			return NULL;
 		goto done;
@@ -183,10 +186,10 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	     dma_alloc_need_uncached(dev, attrs)) ||
 	    (IS_ENABLED(CONFIG_DMA_REMAP) && PageHighMem(page))) {
 		/* remove any dirty cache lines on the kernel alias */
-		arch_dma_prep_coherent(page, PAGE_ALIGN(size));
+		arch_dma_prep_coherent(page, size);
 
 		/* create a coherent mapping */
-		ret = dma_common_contiguous_remap(page, PAGE_ALIGN(size),
+		ret = dma_common_contiguous_remap(page, size,
 				dma_pgprot(dev, PAGE_KERNEL, attrs),
 				__builtin_return_address(0));
 		if (!ret)

* [patch for-5.8 2/4] dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails
  2020-06-11 19:20 [patch for-5.8 0/4] dma-direct: dma_direct_alloc_pages() fixes for AMD SEV David Rientjes
  2020-06-11 19:20 ` [patch for-5.8 1/4] dma-direct: always align allocation size in dma_direct_alloc_pages() David Rientjes
@ 2020-06-11 19:20 ` David Rientjes
  2020-06-15  6:56   ` Christoph Hellwig
  2020-06-11 19:20 ` [patch for-5.8 3/4] dma-direct: check return value when encrypting or decrypting memory David Rientjes
  2020-06-11 19:20 ` [patch for-5.8 4/4] dma-direct: add missing set_memory_decrypted() for coherent mapping David Rientjes
  3 siblings, 1 reply; 8+ messages in thread
From: David Rientjes @ 2020-06-11 19:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Thomas Lendacky, Brijesh Singh, Marek Szyprowski, Robin Murphy,
	iommu, linux-kernel

If arch_dma_set_uncached() fails after memory has been decrypted, it needs
to be re-encrypted before freeing.

Fixes: fa7e2247c572 ("dma-direct: make uncached_kernel_address more general")
Cc: stable@vger.kernel.org # 5.7
Signed-off-by: David Rientjes <rientjes@google.com>
---
 kernel/dma/direct.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -220,7 +220,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 		arch_dma_prep_coherent(page, size);
 		ret = arch_dma_set_uncached(ret, size);
 		if (IS_ERR(ret))
-			goto out_free_pages;
+			goto out_encrypt_pages;
 	}
 done:
 	if (force_dma_unencrypted(dev))
@@ -228,6 +228,10 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	else
 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
 	return ret;
+out_encrypt_pages:
+	if (force_dma_unencrypted(dev))
+		set_memory_encrypted((unsigned long)page_address(page),
+				     1 << get_order(size));
 out_free_pages:
 	dma_free_contiguous(dev, page, size);
 	return NULL;

* [patch for-5.8 3/4] dma-direct: check return value when encrypting or decrypting memory
  2020-06-11 19:20 [patch for-5.8 0/4] dma-direct: dma_direct_alloc_pages() fixes for AMD SEV David Rientjes
  2020-06-11 19:20 ` [patch for-5.8 1/4] dma-direct: always align allocation size in dma_direct_alloc_pages() David Rientjes
  2020-06-11 19:20 ` [patch for-5.8 2/4] dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails David Rientjes
@ 2020-06-11 19:20 ` David Rientjes
  2020-06-11 19:20 ` [patch for-5.8 4/4] dma-direct: add missing set_memory_decrypted() for coherent mapping David Rientjes
  3 siblings, 0 replies; 8+ messages in thread
From: David Rientjes @ 2020-06-11 19:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Thomas Lendacky, Brijesh Singh, Marek Szyprowski, Robin Murphy,
	iommu, linux-kernel

__change_page_attr() can fail which will cause set_memory_encrypted() and
set_memory_decrypted() to return non-zero.

If the device requires unencrypted DMA memory and decryption fails, simply
free the memory and fail.

If attempting to re-encrypt in the failure path and that encryption fails,
there is no alternative other than to leak the memory.

Fixes: c10f07aa27da ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Cc: stable@vger.kernel.org # 4.17+
Signed-off-by: David Rientjes <rientjes@google.com>
---
 kernel/dma/direct.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -158,6 +158,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 {
 	struct page *page;
 	void *ret;
+	int err;
 
 	size = PAGE_ALIGN(size);
 
@@ -210,8 +211,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	}
 
 	ret = page_address(page);
-	if (force_dma_unencrypted(dev))
-		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
+	if (force_dma_unencrypted(dev)) {
+		err = set_memory_decrypted((unsigned long)ret,
+					   1 << get_order(size));
+		if (err)
+			goto out_free_pages;
+	}
 
 	memset(ret, 0, size);
 
@@ -229,9 +234,13 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
 	return ret;
 out_encrypt_pages:
-	if (force_dma_unencrypted(dev))
-		set_memory_encrypted((unsigned long)page_address(page),
-				     1 << get_order(size));
+	if (force_dma_unencrypted(dev)) {
+		err = set_memory_encrypted((unsigned long)page_address(page),
+					   1 << get_order(size));
+		/* If memory cannot be re-encrypted, it must be leaked */
+		if (err)
+			return NULL;
+	}
 out_free_pages:
 	dma_free_contiguous(dev, page, size);
 	return NULL;

* [patch for-5.8 4/4] dma-direct: add missing set_memory_decrypted() for coherent mapping
  2020-06-11 19:20 [patch for-5.8 0/4] dma-direct: dma_direct_alloc_pages() fixes for AMD SEV David Rientjes
                   ` (2 preceding siblings ...)
  2020-06-11 19:20 ` [patch for-5.8 3/4] dma-direct: check return value when encrypting or decrypting memory David Rientjes
@ 2020-06-11 19:20 ` David Rientjes
  2020-06-15  7:00   ` Christoph Hellwig
  3 siblings, 1 reply; 8+ messages in thread
From: David Rientjes @ 2020-06-11 19:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Thomas Lendacky, Brijesh Singh, Marek Szyprowski, Robin Murphy,
	iommu, linux-kernel

When a coherent mapping is created in dma_direct_alloc_pages(), it needs
to be decrypted if the device requires unencrypted DMA before returning.

Fixes: 3acac065508f ("dma-mapping: merge the generic remapping helpers into dma-direct")
Cc: stable@vger.kernel.org # 5.5+
Signed-off-by: David Rientjes <rientjes@google.com>
---
 kernel/dma/direct.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -195,6 +195,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 				__builtin_return_address(0));
 		if (!ret)
 			goto out_free_pages;
+		if (force_dma_unencrypted(dev)) {
+			err = set_memory_decrypted((unsigned long)ret,
+						   1 << get_order(size));
+			if (err)
+				goto out_free_pages;
+		}
 		memset(ret, 0, size);
 		goto done;
 	}

* Re: [patch for-5.8 1/4] dma-direct: always align allocation size in dma_direct_alloc_pages()
  2020-06-11 19:20 ` [patch for-5.8 1/4] dma-direct: always align allocation size in dma_direct_alloc_pages() David Rientjes
@ 2020-06-15  6:54   ` Christoph Hellwig
  0 siblings, 0 replies; 8+ messages in thread
From: Christoph Hellwig @ 2020-06-15  6:54 UTC (permalink / raw)
  To: David Rientjes
  Cc: Christoph Hellwig, Thomas Lendacky, Brijesh Singh,
	Marek Szyprowski, Robin Murphy, iommu, linux-kernel

On Thu, Jun 11, 2020 at 12:20:28PM -0700, David Rientjes wrote:
> dma_alloc_contiguous() does size >> PAGE_SHIFT and set_memory_decrypted()
> works at page granularity.  It's necessary to page align the allocation
> size in dma_direct_alloc_pages() for consistent behavior.
> 
> This also fixes an issue when arch_dma_prep_coherent() is called on an
> unaligned allocation size for dma_alloc_need_uncached() when
> CONFIG_DMA_DIRECT_REMAP is disabled but CONFIG_ARCH_HAS_DMA_SET_UNCACHED
> is enabled.
> 
> Cc: stable@vger.kernel.org
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
>  kernel/dma/direct.c | 17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -112,11 +112,12 @@ static inline bool dma_should_free_from_pool(struct device *dev,
>  struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
>  		gfp_t gfp, unsigned long attrs)
>  {
> -	size_t alloc_size = PAGE_ALIGN(size);
>  	int node = dev_to_node(dev);
>  	struct page *page = NULL;
>  	u64 phys_limit;
>  
> +	VM_BUG_ON(!PAGE_ALIGNED(size));

This really should be a WARN_ON_ONCE, but I've fixed this up before
applying.  I've also added a prep patch to mark __dma_direct_alloc_pages
static as part of auditing for other callers.

* Re: [patch for-5.8 2/4] dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails
  2020-06-11 19:20 ` [patch for-5.8 2/4] dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails David Rientjes
@ 2020-06-15  6:56   ` Christoph Hellwig
  0 siblings, 0 replies; 8+ messages in thread
From: Christoph Hellwig @ 2020-06-15  6:56 UTC (permalink / raw)
  To: David Rientjes
  Cc: Christoph Hellwig, Thomas Lendacky, Brijesh Singh,
	Marek Szyprowski, Robin Murphy, iommu, linux-kernel

On Thu, Jun 11, 2020 at 12:20:29PM -0700, David Rientjes wrote:
> If arch_dma_set_uncached() fails after memory has been decrypted, it needs
> to be re-encrypted before freeing.
> 
> Fixes: fa7e2247c572 ("dma-direct: make uncached_kernel_address more general")
> Cc: stable@vger.kernel.org # 5.7
> Signed-off-by: David Rientjes <rientjes@google.com>

Note that this can't really happen as CONFIG_ARCH_HAS_DMA_SET_UNCACHED
and memory encryption are mutually exclusive in practice.  Still looks
ok and useful otherwise.

* Re: [patch for-5.8 4/4] dma-direct: add missing set_memory_decrypted() for coherent mapping
  2020-06-11 19:20 ` [patch for-5.8 4/4] dma-direct: add missing set_memory_decrypted() for coherent mapping David Rientjes
@ 2020-06-15  7:00   ` Christoph Hellwig
  0 siblings, 0 replies; 8+ messages in thread
From: Christoph Hellwig @ 2020-06-15  7:00 UTC (permalink / raw)
  To: David Rientjes
  Cc: Christoph Hellwig, Thomas Lendacky, Brijesh Singh,
	Marek Szyprowski, Robin Murphy, iommu, linux-kernel

On Thu, Jun 11, 2020 at 12:20:32PM -0700, David Rientjes wrote:
> When a coherent mapping is created in dma_direct_alloc_pages(), it needs
> to be decrypted if the device requires unencrypted DMA before returning.
> 
> Fixes: 3acac065508f ("dma-mapping: merge the generic remapping helpers into dma-direct")
> Cc: stable@vger.kernel.org # 5.5+
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
>  kernel/dma/direct.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -195,6 +195,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
>  				__builtin_return_address(0));
>  		if (!ret)
>  			goto out_free_pages;
> +		if (force_dma_unencrypted(dev)) {
> +			err = set_memory_decrypted((unsigned long)ret,
> +						   1 << get_order(size));
> +			if (err)
> +				goto out_free_pages;
> +		}

Note that ret is a vmalloc address here.  Does set_memory_decrypted
work for that case?  Again this should be mostly theoretical, so I'm
not too worried for now.
