* [PATCH v4 0/2] Fix SEV user-space mapping of unencrypted coherent memory
@ 2020-03-04 11:45 Thomas Hellström (VMware)
  2020-03-04 11:45 ` [PATCH v3 1/2] x86: Don't let pgprot_modify() change the page encryption bit Thomas Hellström (VMware)
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Thomas Hellström (VMware) @ 2020-03-04 11:45 UTC (permalink / raw)
  To: x86, Christoph Hellwig
  Cc: linux-kernel, Thomas Hellström, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Christian König,
	Marek Szyprowski, Tom Lendacky

This patchset fixes dma_mmap_coherent() mapping of unencrypted memory in
otherwise encrypted environments, where it would incorrectly map that memory as
encrypted.

With SEV, and sometimes with SME encryption, the DMA API coherent memory is
typically unencrypted, meaning the linear kernel map has the encryption
bit cleared. However, the default page protection returned from vm_get_page_prot()
has the encryption bit set. So to compute the correct page protection we need
to clear the encryption bit.
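
As a rough sketch of that transform (illustrative only, with a made-up
helper name; it is not code from this series):

#include <linux/mm.h>		/* struct vm_area_struct, vm_get_page_prot() */
#include <asm/pgtable.h>	/* pgprot_decrypted() */

/*
 * Illustrative: under SEV/SME the default protection from
 * vm_get_page_prot() has the encryption bit set, so a mapping of
 * unencrypted memory needs that bit cleared explicitly.
 */
static pgprot_t example_unencrypted_prot(struct vm_area_struct *vma)
{
	return pgprot_decrypted(vm_get_page_prot(vma->vm_flags));
}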

Also, in order for the encryption bit setting to survive across do_mmap() and
mprotect_fixup(), we need to make pgprot_modify() aware of it and not touch it.
Therefore make sme_me_mask part of _PAGE_CHG_MASK and make sure
pgprot_modify() also preserves cleared bits that are part of _PAGE_CHG_MASK,
not just set bits. The use of pgprot_modify() is currently quite limited and
easy to audit.
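
As a hedged sketch of what this means for callers (the function below is
illustrative, not the actual mprotect_fixup()/do_mmap() code):

#include <linux/mm.h>
#include <asm/pgtable.h>

/*
 * With _PAGE_ENC part of _PAGE_CHG_MASK, pgprot_modify() preserves the
 * encryption state of the old protection, whether the bit is set or
 * cleared, instead of taking the default from vm_get_page_prot().
 */
static void example_refresh_vm_page_prot(struct vm_area_struct *vma)
{
	vma->vm_page_prot = pgprot_modify(vma->vm_page_prot,
					  vm_get_page_prot(vma->vm_flags));
}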

(Note that the encryption status is not logically encoded in the pfn but in
the page protection, even though an address line in the physical address is used).

The patchset has seen some sanity testing by exporting dma_pgprot() and
using it in the vmwgfx mmap handler with SEV enabled.
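
Roughly, the test amounted to something like the sketch below; the handler
name is made up, the attrs value of 0 is an assumption, and the real vmwgfx
code differs:

#include <linux/dma-mapping.h>	/* dma_pgprot(); header location assumed */
#include <linux/mm.h>

/* Hypothetical stand-in for the vmwgfx mmap handler used in testing. */
static int vmw_example_mmap(struct device *dev, struct vm_area_struct *vma)
{
	/* Map coherent memory with the encryption bit cleared under SEV. */
	vma->vm_page_prot = dma_pgprot(dev, vm_get_page_prot(vma->vm_flags), 0);
	return 0;
}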

As far as I can tell there are no current users of dma_mmap_coherent() with
SEV or SME encryption which means that there is no need to CC stable.

Changes since:
RFC:
- Make sme_me_mask part of _PAGE_CHG_MASK rather than using it on its own in
  pgprot_modify().
v1:
- Clarify which use-cases this patchset actually fixes.
v2:
- Use _PAGE_ENC instead of sme_me_mask in the definition of _PAGE_CHG_MASK
v3:
- Added RB from Dave Hansen.

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christian König <christian.koenig@amd.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>


* [PATCH v3 1/2] x86: Don't let pgprot_modify() change the page encryption bit
  2020-03-04 11:45 [PATCH v4 0/2] Fix SEV user-space mapping of unencrypted coherent memory Thomas Hellström (VMware)
@ 2020-03-04 11:45 ` Thomas Hellström (VMware)
  2020-03-16 19:43   ` Tom Lendacky
  2020-03-17 14:57   ` [tip: x86/mm] " tip-bot2 for Thomas Hellstrom
  2020-03-04 11:45 ` [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages Thomas Hellström (VMware)
  2020-03-16 12:42 ` [PATCH v4 0/2] Fix SEV user-space mapping of unencrypted coherent memory Thomas Hellström (VMware)
  2 siblings, 2 replies; 9+ messages in thread
From: Thomas Hellström (VMware) @ 2020-03-04 11:45 UTC (permalink / raw)
  To: x86, Christoph Hellwig
  Cc: linux-kernel, Thomas Hellstrom, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Christian König, Marek Szyprowski,
	Tom Lendacky

From: Thomas Hellstrom <thellstrom@vmware.com>

When SEV or SME is enabled and active, vm_get_page_prot() typically
returns with the encryption bit set. This means that users of
pgprot_modify(, vm_get_page_prot()) (mprotect_fixup, do_mmap) end up with
a value of vma->vm_page_prot that is not consistent with the intended
protection of the PTEs. This is also important for fault handlers that
rely on the VMA vm_page_prot to set the page protection. Fix this by
not allowing pgprot_modify() to change the encryption bit, similar to
how it's done for PAT bits.
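
As a hedged illustration of the fault-handler case (a hypothetical handler,
not taken from any driver in this series):

#include <linux/mm.h>

/*
 * The handler inserts a PFN using the VMA's vm_page_prot. If
 * pgprot_modify() silently re-sets the encryption bit in vm_page_prot,
 * a mapping that was meant to be unencrypted becomes encrypted here.
 * Treating vmf->pgoff as the PFN is an assumption for the example.
 */
static vm_fault_t example_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;

	return vmf_insert_pfn_prot(vma, vmf->address, vmf->pgoff,
				   vma->vm_page_prot);
}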

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christian König <christian.koenig@amd.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
---
 arch/x86/include/asm/pgtable.h       | 7 +++++--
 arch/x86/include/asm/pgtable_types.h | 2 +-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index d9925b10e326..c4615032c5ef 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -627,12 +627,15 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 	return __pmd(val);
 }
 
-/* mprotect needs to preserve PAT bits when updating vm_page_prot */
+/*
+ * mprotect needs to preserve PAT and encryption bits when updating
+ * vm_page_prot
+ */
 #define pgprot_modify pgprot_modify
 static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
 {
 	pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
-	pgprotval_t addbits = pgprot_val(newprot);
+	pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK;
 	return __pgprot(preservebits | addbits);
 }
 
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0239998d8cdc..65c2ecd730c5 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -118,7 +118,7 @@
  */
 #define _PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |		\
 			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |	\
-			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC)
 #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
 
 /*
-- 
2.21.1



* [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages
  2020-03-04 11:45 [PATCH v4 0/2] Fix SEV user-space mapping of unencrypted coherent memory Thomas Hellström (VMware)
  2020-03-04 11:45 ` [PATCH v3 1/2] x86: Don't let pgprot_modify() change the page encryption bit Thomas Hellström (VMware)
@ 2020-03-04 11:45 ` Thomas Hellström (VMware)
  2020-03-05 15:36   ` Christoph Hellwig
                     ` (2 more replies)
  2020-03-16 12:42 ` [PATCH v4 0/2] Fix SEV user-space mapping of unencrypted coherent memory Thomas Hellström (VMware)
  2 siblings, 3 replies; 9+ messages in thread
From: Thomas Hellström (VMware) @ 2020-03-04 11:45 UTC (permalink / raw)
  To: x86, Christoph Hellwig
  Cc: linux-kernel, Thomas Hellstrom, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Christian König, Marek Szyprowski,
	Tom Lendacky

From: Thomas Hellstrom <thellstrom@vmware.com>

When dma_mmap_coherent() sets up a mapping to unencrypted coherent memory
under SEV encryption and sometimes under SME encryption, it will actually
set up an encrypted mapping rather than an unencrypted one, causing devices
that DMA from that memory to read encrypted contents. Fix this.

When force_dma_unencrypted() returns true, the linear kernel map of the
coherent pages has had the encryption bit explicitly cleared and the
page content is unencrypted. Make sure that any additional PTEs we set
up to these pages also have the encryption bit cleared by having
dma_pgprot() return a protection with the encryption bit cleared in this
case.
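
A hedged usage sketch (the function name is made up): a driver mapping its
coherent DMA buffer to user space goes through dma_pgprot() on the common
path, so with this change the resulting PTEs are unencrypted whenever
force_dma_unencrypted() is true for the device:

#include <linux/dma-mapping.h>
#include <linux/mm.h>

static int example_mmap_coherent_buf(struct device *dev,
				     struct vm_area_struct *vma,
				     void *cpu_addr, dma_addr_t dma_addr,
				     size_t size)
{
	/* dma_mmap_coherent() derives the user PTE protection internally. */
	return dma_mmap_coherent(dev, vma, cpu_addr, dma_addr, size);
}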

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christian König <christian.koenig@amd.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 kernel/dma/mapping.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 12ff766ec1fa..98e3d873792e 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -154,6 +154,8 @@ EXPORT_SYMBOL(dma_get_sgtable_attrs);
  */
 pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
 {
+	if (force_dma_unencrypted(dev))
+		prot = pgprot_decrypted(prot);
 	if (dev_is_dma_coherent(dev) ||
 	    (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) &&
              (attrs & DMA_ATTR_NON_CONSISTENT)))
-- 
2.21.1



* Re: [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages
  2020-03-04 11:45 ` [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages Thomas Hellström (VMware)
@ 2020-03-05 15:36   ` Christoph Hellwig
  2020-03-16 19:44   ` Tom Lendacky
  2020-03-17 14:57   ` [tip: x86/mm] " tip-bot2 for Thomas Hellstrom
  2 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2020-03-05 15:36 UTC (permalink / raw)
  To: Thomas Hellström (VMware)
  Cc: x86, Christoph Hellwig, linux-kernel, Thomas Hellstrom,
	Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Christian König, Marek Szyprowski, Tom Lendacky

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

x86 maintainers: feel free to pick this up through your tree.


* Re: [PATCH v4 0/2] Fix SEV user-space mapping of unencrypted coherent memory
  2020-03-04 11:45 [PATCH v4 0/2] Fix SEV user-space mapping of unencrypted coherent memory Thomas Hellström (VMware)
  2020-03-04 11:45 ` [PATCH v3 1/2] x86: Don't let pgprot_modify() change the page encryption bit Thomas Hellström (VMware)
  2020-03-04 11:45 ` [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages Thomas Hellström (VMware)
@ 2020-03-16 12:42 ` Thomas Hellström (VMware)
  2 siblings, 0 replies; 9+ messages in thread
From: Thomas Hellström (VMware) @ 2020-03-16 12:42 UTC (permalink / raw)
  To: x86, Dave Hansen, Ingo Molnar
  Cc: Christoph Hellwig, linux-kernel, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Christian König, Marek Szyprowski, Tom Lendacky

Dave, Ingo,

On 3/4/20 12:45 PM, Thomas Hellström (VMware) wrote:
> This patchset fixes dma_mmap_coherent() mapping of unencrypted memory in
> otherwise encrypted environments, where it would incorrectly map that memory as
> encrypted.
>
> With SEV, and sometimes with SME encryption, the DMA API coherent memory is
> typically unencrypted, meaning the linear kernel map has the encryption
> bit cleared. However, the default page protection returned from vm_get_page_prot()
> has the encryption bit set. So to compute the correct page protection we need
> to clear the encryption bit.
>
> Also, in order for the encryption bit setting to survive across do_mmap() and
> mprotect_fixup(), we need to make pgprot_modify() aware of it and not touch it.
> Therefore make sme_me_mask part of _PAGE_CHG_MASK and make sure
> pgprot_modify() also preserves cleared bits that are part of _PAGE_CHG_MASK,
> not just set bits. The use of pgprot_modify() is currently quite limited and
> easy to audit.
>
> (Note that the encryption status is not logically encoded in the pfn but in
> the page protection, even though an address line in the physical address is used).
>
> The patchset has seen some sanity testing by exporting dma_pgprot() and
> using it in the vmwgfx mmap handler with SEV enabled.
>
> As far as I can tell there are no current users of dma_mmap_coherent() with
> SEV or SME encryption which means that there is no need to CC stable.
>
> Changes since:
> RFC:
> - Make sme_me_mask part of _PAGE_CHG_MASK rather than using it on its own in
>    pgprot_modify().
> v1:
> - Clarify which use-cases this patchset actually fixes.
> v2:
> - Use _PAGE_ENC instead of sme_me_mask in the definition of _PAGE_CHG_MASK
> v3:
> - Added RB from Dave Hansen.
>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
Could we merge this small series through x86?
Patch 2/2 has a

Reviewed-by: Christoph Hellwig <hch@lst.de>

Please let me know if you want me to resend with that RB added.

Thanks,
Thomas



* Re: [PATCH v3 1/2] x86: Don't let pgprot_modify() change the page encryption bit
  2020-03-04 11:45 ` [PATCH v3 1/2] x86: Don't let pgprot_modify() change the page encryption bit Thomas Hellström (VMware)
@ 2020-03-16 19:43   ` Tom Lendacky
  2020-03-17 14:57   ` [tip: x86/mm] " tip-bot2 for Thomas Hellstrom
  1 sibling, 0 replies; 9+ messages in thread
From: Tom Lendacky @ 2020-03-16 19:43 UTC (permalink / raw)
  To: Thomas Hellström (VMware), x86, Christoph Hellwig
  Cc: linux-kernel, Thomas Hellstrom, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Christian König, Marek Szyprowski

On 3/4/20 5:45 AM, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom <thellstrom@vmware.com>
> 
> When SEV or SME is enabled and active, vm_get_page_prot() typically
> returns with the encryption bit set. This means that users of
> pgprot_modify(, vm_get_page_prot()) (mprotect_fixup, do_mmap) end up with
> a value of vma->vm_page_prot that is not consistent with the intended
> protection of the PTEs. This is also important for fault handlers that
> rely on the VMA vm_page_prot to set the page protection. Fix this by
> not allowing pgprot_modify() to change the encryption bit, similar to
> how it's done for PAT bits.
> 
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
> Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>

Acked-by: Tom Lendacky <thomas.lendacky@amd.com>

> ---
>  arch/x86/include/asm/pgtable.h       | 7 +++++--
>  arch/x86/include/asm/pgtable_types.h | 2 +-
>  2 files changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index d9925b10e326..c4615032c5ef 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -627,12 +627,15 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
>  	return __pmd(val);
>  }
>  
> -/* mprotect needs to preserve PAT bits when updating vm_page_prot */
> +/*
> + * mprotect needs to preserve PAT and encryption bits when updating
> + * vm_page_prot
> + */
>  #define pgprot_modify pgprot_modify
>  static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
>  {
>  	pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
> -	pgprotval_t addbits = pgprot_val(newprot);
> +	pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK;
>  	return __pgprot(preservebits | addbits);
>  }
>  
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index 0239998d8cdc..65c2ecd730c5 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -118,7 +118,7 @@
>   */
>  #define _PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |		\
>  			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |	\
> -			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
> +			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC)
>  #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
>  
>  /*
> 


* Re: [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages
  2020-03-04 11:45 ` [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages Thomas Hellström (VMware)
  2020-03-05 15:36   ` Christoph Hellwig
@ 2020-03-16 19:44   ` Tom Lendacky
  2020-03-17 14:57   ` [tip: x86/mm] " tip-bot2 for Thomas Hellstrom
  2 siblings, 0 replies; 9+ messages in thread
From: Tom Lendacky @ 2020-03-16 19:44 UTC (permalink / raw)
  To: Thomas Hellström (VMware), x86, Christoph Hellwig
  Cc: linux-kernel, Thomas Hellstrom, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Christian König, Marek Szyprowski

On 3/4/20 5:45 AM, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom <thellstrom@vmware.com>
> 
> When dma_mmap_coherent() sets up a mapping to unencrypted coherent memory
> under SEV encryption and sometimes under SME encryption, it will actually
> set up an encrypted mapping rather than an unencrypted one, causing devices
> that DMA from that memory to read encrypted contents. Fix this.
> 
> When force_dma_unencrypted() returns true, the linear kernel map of the
> coherent pages has had the encryption bit explicitly cleared and the
> page content is unencrypted. Make sure that any additional PTEs we set
> up to these pages also have the encryption bit cleared by having
> dma_pgprot() return a protection with the encryption bit cleared in this
> case.
> 
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>

Acked-by: Tom Lendacky <thomas.lendacky@amd.com>

> ---
>  kernel/dma/mapping.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 12ff766ec1fa..98e3d873792e 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -154,6 +154,8 @@ EXPORT_SYMBOL(dma_get_sgtable_attrs);
>   */
>  pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
>  {
> +	if (force_dma_unencrypted(dev))
> +		prot = pgprot_decrypted(prot);
>  	if (dev_is_dma_coherent(dev) ||
>  	    (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) &&
>               (attrs & DMA_ATTR_NON_CONSISTENT)))
> 


* [tip: x86/mm] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages
  2020-03-04 11:45 ` [PATCH v3 2/2] dma-mapping: Fix dma_pgprot() for unencrypted coherent pages Thomas Hellström (VMware)
  2020-03-05 15:36   ` Christoph Hellwig
  2020-03-16 19:44   ` Tom Lendacky
@ 2020-03-17 14:57   ` tip-bot2 for Thomas Hellstrom
  2 siblings, 0 replies; 9+ messages in thread
From: tip-bot2 for Thomas Hellstrom @ 2020-03-17 14:57 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Hellstrom, Borislav Petkov, Christoph Hellwig,
	Tom Lendacky, x86, LKML

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     17c4a2ae15a7aaefe84bdb271952678c5c9cd8e1
Gitweb:        https://git.kernel.org/tip/17c4a2ae15a7aaefe84bdb271952678c5c9cd8e1
Author:        Thomas Hellstrom <thellstrom@vmware.com>
AuthorDate:    Wed, 04 Mar 2020 12:45:27 +01:00
Committer:     Borislav Petkov <bp@suse.de>
CommitterDate: Tue, 17 Mar 2020 11:52:58 +01:00

dma-mapping: Fix dma_pgprot() for unencrypted coherent pages

When dma_mmap_coherent() sets up a mapping to unencrypted coherent memory
under SEV encryption and sometimes under SME encryption, it will actually
set up an encrypted mapping rather than an unencrypted one, causing devices
that DMA from that memory to read encrypted contents. Fix this.

When force_dma_unencrypted() returns true, the linear kernel map of the
coherent pages has had the encryption bit explicitly cleared and the
page content is unencrypted. Make sure that any additional PTEs we set
up to these pages also have the encryption bit cleared by having
dma_pgprot() return a protection with the encryption bit cleared in this
case.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20200304114527.3636-3-thomas_os@shipmail.org
---
 kernel/dma/mapping.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 12ff766..98e3d87 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -154,6 +154,8 @@ EXPORT_SYMBOL(dma_get_sgtable_attrs);
  */
 pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
 {
+	if (force_dma_unencrypted(dev))
+		prot = pgprot_decrypted(prot);
 	if (dev_is_dma_coherent(dev) ||
 	    (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) &&
              (attrs & DMA_ATTR_NON_CONSISTENT)))


* [tip: x86/mm] x86: Don't let pgprot_modify() change the page encryption bit
  2020-03-04 11:45 ` [PATCH v3 1/2] x86: Don't let pgprot_modify() change the page encryption bit Thomas Hellström (VMware)
  2020-03-16 19:43   ` Tom Lendacky
@ 2020-03-17 14:57   ` tip-bot2 for Thomas Hellstrom
  1 sibling, 0 replies; 9+ messages in thread
From: tip-bot2 for Thomas Hellstrom @ 2020-03-17 14:57 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Hellstrom, Borislav Petkov, Dave Hansen, Tom Lendacky, x86, LKML

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     6db73f17c5f155dbcfd5e48e621c706270b84df0
Gitweb:        https://git.kernel.org/tip/6db73f17c5f155dbcfd5e48e621c706270b84df0
Author:        Thomas Hellstrom <thellstrom@vmware.com>
AuthorDate:    Wed, 04 Mar 2020 12:45:26 +01:00
Committer:     Borislav Petkov <bp@suse.de>
CommitterDate: Tue, 17 Mar 2020 11:48:31 +01:00

x86: Don't let pgprot_modify() change the page encryption bit

When SEV or SME is enabled and active, vm_get_page_prot() typically
returns with the encryption bit set. This means that users of
pgprot_modify(, vm_get_page_prot()) (mprotect_fixup(), do_mmap()) end up
with a value of vma->vm_page_prot that is not consistent with the intended
protection of the PTEs.

This is also important for fault handlers that rely on the VMA
vm_page_prot to set the page protection. Fix this by not allowing
pgprot_modify() to change the encryption bit, similar to how it's done
for PAT bits.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20200304114527.3636-2-thomas_os@shipmail.org
---
 arch/x86/include/asm/pgtable.h       | 7 +++++--
 arch/x86/include/asm/pgtable_types.h | 2 +-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7e11866..64a03f2 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -627,12 +627,15 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 	return __pmd(val);
 }
 
-/* mprotect needs to preserve PAT bits when updating vm_page_prot */
+/*
+ * mprotect needs to preserve PAT and encryption bits when updating
+ * vm_page_prot
+ */
 #define pgprot_modify pgprot_modify
 static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
 {
 	pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
-	pgprotval_t addbits = pgprot_val(newprot);
+	pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK;
 	return __pgprot(preservebits | addbits);
 }
 
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0239998..65c2ecd 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -118,7 +118,7 @@
  */
 #define _PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |		\
 			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |	\
-			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC)
 #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
 
 /*

