* [PATCH] dma-direct: Don't over-decrypt memory
@ 2022-05-20 17:10 Robin Murphy
From: Robin Murphy @ 2022-05-20 17:10 UTC (permalink / raw)
To: hch; +Cc: m.szyprowski, iommu, linux-kernel, thomas.lendacky, rientjes, stable
The original x86 sev_alloc() only called set_memory_decrypted() on
memory returned by alloc_pages_node(), so the page order calculation
fell out of that logic. However, the common dma-direct code has several
potential allocators, not all of which are guaranteed to round up the
underlying allocation to a power-of-two size, so carrying over that
calculation for the encryption/decryption size was a mistake. Fix it by
rounding to a *number* of pages, rather than an order.
Until recently there was an even worse interaction with DMA_DIRECT_REMAP
where we could have ended up decrypting part of the next adjacent
vmalloc area, only averted by no architecture actually supporting both
configs at once. Don't ask how I found that one out...
CC: stable@vger.kernel.org
Fixes: c10f07aa27da ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
kernel/dma/direct.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 9743c6ccce1a..09d78aa40466 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -79,7 +79,7 @@ static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
{
if (!force_dma_unencrypted(dev))
return 0;
- return set_memory_decrypted((unsigned long)vaddr, 1 << get_order(size));
+ return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
}
static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
@@ -88,7 +88,7 @@ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
if (!force_dma_unencrypted(dev))
return 0;
- ret = set_memory_encrypted((unsigned long)vaddr, 1 << get_order(size));
+ ret = set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
if (ret)
pr_warn_ratelimited("leaking DMA memory that can't be re-encrypted\n");
return ret;
--
2.35.3.dirty
* Re: [PATCH] dma-direct: Don't over-decrypt memory
From: Christoph Hellwig @ 2022-05-22 13:07 UTC (permalink / raw)
To: Robin Murphy
Cc: hch, m.szyprowski, iommu, linux-kernel, thomas.lendacky,
rientjes, stable
Thanks,
applied to the dma-mapping for-next branch.
* Re: [PATCH] dma-direct: Don't over-decrypt memory
From: David Rientjes @ 2022-05-23 1:04 UTC (permalink / raw)
To: Robin Murphy
Cc: hch, m.szyprowski, iommu, linux-kernel, thomas.lendacky, stable
On Fri, 20 May 2022, Robin Murphy wrote:
> The original x86 sev_alloc() only called set_memory_decrypted() on
> memory returned by alloc_pages_node(), so the page order calculation
> fell out of that logic. However, the common dma-direct code has several
> potential allocators, not all of which are guaranteed to round up the
> underlying allocation to a power-of-two size, so carrying over that
> calculation for the encryption/decryption size was a mistake. Fix it by
> rounding to a *number* of pages, rather than an order.
>
> Until recently there was an even worse interaction with DMA_DIRECT_REMAP
> where we could have ended up decrypting part of the next adjacent
> vmalloc area, only averted by no architecture actually supporting both
> configs at once. Don't ask how I found that one out...
>
> CC: stable@vger.kernel.org
> Fixes: c10f07aa27da ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: David Rientjes <rientjes@google.com>