LKML Archive on lore.kernel.org
* SME/32-bit regression
@ 2017-09-06  3:45 Boris Ostrovsky
  2017-09-06  9:26 ` Borislav Petkov
  0 siblings, 1 reply; 13+ messages in thread
From: Boris Ostrovsky @ 2017-09-06  3:45 UTC (permalink / raw)
  To: thomas.lendacky
  Cc: X86 ML, linux-kernel, Ingo Molnar, H. Peter Anvin, Thomas Gleixner

It appears there is a regression for 32-bit kernels due to SME changes.

I bisected my particular problem (Xen PV guest) to 
21729f81ce8ae76a6995681d40e16f7ce8075db4 but I also saw pmd_clear_bad() 
errors on baremetal. This seems to be caused by sme_me_mask being an 
unsigned long as opposed to phys_addr_t (the actual problem is that 
__PHYSICAL_MASK is truncated). When I declare it as u64 and drop the 
unsigned long cast in __sme_set()/__sme_clr() the problem goes away. 
(This presumably won't work for non-PAE, which I haven't tried.)


-boris

* Re: SME/32-bit regression
  2017-09-06  3:45 SME/32-bit regression Boris Ostrovsky
@ 2017-09-06  9:26 ` Borislav Petkov
  2017-09-06  9:45   ` Borislav Petkov
  2017-09-06 13:54   ` SME/32-bit regression Boris Ostrovsky
  0 siblings, 2 replies; 13+ messages in thread
From: Borislav Petkov @ 2017-09-06  9:26 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: thomas.lendacky, X86 ML, linux-kernel, Ingo Molnar,
	H. Peter Anvin, Thomas Gleixner

On Tue, Sep 05, 2017 at 11:45:07PM -0400, Boris Ostrovsky wrote:
> It appears there is a regression for 32-bit kernels due to SME changes.
> 
> I bisected my particular problem

It being? Doesn't boot, splats?

> (Xen PV guest) to
> 21729f81ce8ae76a6995681d40e16f7ce8075db4 but I also saw pmd_clear_bad()
> errors on baremetal. This seems to be caused by sme_me_mask being an
> unsigned long as opposed to phys_addr_t (the actual problem is that
> __PHYSICAL_MASK is truncated). When I declare it as u64 and drop the unsigned
> long cast in __sme_set()/__sme_clr() the problem goes away. (This presumably
> won't work for non-PAE, which I haven't tried.)

Right, so I think we should do this because those macros should not have
any effect on !CONFIG_AMD_MEM_ENCRYPT setups.

---
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 1255f09f5e42..823eec6ba951 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -35,6 +35,7 @@ static inline unsigned long sme_get_me_mask(void)
 	return sme_me_mask;
 }
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 /*
  * The __sme_set() and __sme_clr() macros are useful for adding or removing
  * the encryption mask from a value (e.g. when dealing with pagetable
@@ -42,6 +43,10 @@ static inline unsigned long sme_get_me_mask(void)
  */
 #define __sme_set(x)		((unsigned long)(x) | sme_me_mask)
 #define __sme_clr(x)		((unsigned long)(x) & ~sme_me_mask)
+#else
+#define __sme_set(x)		(x)
+#define __sme_clr(x)		(x)
+#endif
 
 #endif	/* __ASSEMBLY__ */
 



-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

* Re: SME/32-bit regression
  2017-09-06  9:26 ` Borislav Petkov
@ 2017-09-06  9:45   ` Borislav Petkov
       [not found]     ` <CY4PR12MB11415936E980489690575E65EC970@CY4PR12MB1141.namprd12.prod.outlook.com>
  2017-09-06 13:54   ` SME/32-bit regression Boris Ostrovsky
  1 sibling, 1 reply; 13+ messages in thread
From: Borislav Petkov @ 2017-09-06  9:45 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: Boris Ostrovsky, thomas.lendacky, X86 ML, linux-kernel,
	Ingo Molnar, H. Peter Anvin, Thomas Gleixner

Btw,

Tom, pls remind me again, why didn't we make sme_me_mask u64?

Because this is the most logical type for a memory encryption mask which
covers 64 bits... In any case, the below builds fine here.

---
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 8e618fcf1f7c..6a77c63540f7 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -21,7 +21,7 @@
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
-extern unsigned long sme_me_mask;
+extern u64 sme_me_mask;
 
 void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
 			 unsigned long decrypted_kernel_vaddr,
@@ -49,7 +49,7 @@ void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
 
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
-#define sme_me_mask	0UL
+#define sme_me_mask	0ULL
 
 static inline void __init sme_early_encrypt(resource_size_t paddr,
 					    unsigned long size) { }
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 0fbd09269757..3fcc8e01683b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -37,7 +37,7 @@ static char sme_cmdline_off[] __initdata = "off";
  * reside in the .data section so as not to be zeroed out when the .bss
  * section is later cleared.
  */
-unsigned long sme_me_mask __section(.data) = 0;
+u64 sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
 
 /* Buffer used for early in-place encryption by BSP, no locking needed */
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 1255f09f5e42..265a9cd21cb4 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -21,7 +21,7 @@
 
 #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
-#define sme_me_mask	0UL
+#define sme_me_mask	0ULL
 
 #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
@@ -30,18 +30,23 @@ static inline bool sme_active(void)
 	return !!sme_me_mask;
 }
 
-static inline unsigned long sme_get_me_mask(void)
+static inline u64 sme_get_me_mask(void)
 {
 	return sme_me_mask;
 }
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 /*
  * The __sme_set() and __sme_clr() macros are useful for adding or removing
  * the encryption mask from a value (e.g. when dealing with pagetable
  * entries).
  */
-#define __sme_set(x)		((unsigned long)(x) | sme_me_mask)
-#define __sme_clr(x)		((unsigned long)(x) & ~sme_me_mask)
+#define __sme_set(x)		((x) | sme_me_mask)
+#define __sme_clr(x)		((x) & ~sme_me_mask)
+#else
+#define __sme_set(x)		(x)
+#define __sme_clr(x)		(x)
+#endif
 
 #endif	/* __ASSEMBLY__ */
 

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

* Re: SME/32-bit regression
  2017-09-06  9:26 ` Borislav Petkov
  2017-09-06  9:45   ` Borislav Petkov
@ 2017-09-06 13:54   ` Boris Ostrovsky
  2017-09-06 13:57     ` Thomas Gleixner
  1 sibling, 1 reply; 13+ messages in thread
From: Boris Ostrovsky @ 2017-09-06 13:54 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: thomas.lendacky, X86 ML, linux-kernel, Ingo Molnar,
	H. Peter Anvin, Thomas Gleixner

On 09/06/2017 05:26 AM, Borislav Petkov wrote:
> On Tue, Sep 05, 2017 at 11:45:07PM -0400, Boris Ostrovsky wrote:
>> It appears there is a regression for 32-bit kernels due to SME changes.
>>
>> I bisected my particular problem
> It being? Doesn't boot, splats?

Xen guest crashes very early, before a splat can be generated.

>
>> (Xen PV guest) to
>> 21729f81ce8ae76a6995681d40e16f7ce8075db4 but I also saw pmd_clear_bad()
>> errors on baremetal. This seems to be caused by sme_me_mask being an
>> unsigned long as opposed to phys_addr_t (the actual problem is that
>> __PHYSICAL_MASK is truncated). When I declare it as u64 and drop the unsigned
>> long cast in __sme_set()/__sme_clr() the problem goes away. (This presumably
>> won't work for non-PAE, which I haven't tried.)
> Right, so I think we should do this because those macros should not have
> any effect on !CONFIG_AMD_MEM_ENCRYPT setups.

This won't help, though, if the kernel is built with SME support.

-boris

>
> ---
> diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
> index 1255f09f5e42..823eec6ba951 100644
> --- a/include/linux/mem_encrypt.h
> +++ b/include/linux/mem_encrypt.h
> @@ -35,6 +35,7 @@ static inline unsigned long sme_get_me_mask(void)
>  	return sme_me_mask;
>  }
>  
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
>  /*
>   * The __sme_set() and __sme_clr() macros are useful for adding or removing
>   * the encryption mask from a value (e.g. when dealing with pagetable
> @@ -42,6 +43,10 @@ static inline unsigned long sme_get_me_mask(void)
>   */
>  #define __sme_set(x)		((unsigned long)(x) | sme_me_mask)
>  #define __sme_clr(x)		((unsigned long)(x) & ~sme_me_mask)
> +#else
> +#define __sme_set(x)		(x)
> +#define __sme_clr(x)		(x)
> +#endif
>  
>  #endif	/* __ASSEMBLY__ */
>  
>
>
>

* Re: SME/32-bit regression
  2017-09-06 13:54   ` SME/32-bit regression Boris Ostrovsky
@ 2017-09-06 13:57     ` Thomas Gleixner
  2017-09-06 14:03       ` Boris Ostrovsky
  0 siblings, 1 reply; 13+ messages in thread
From: Thomas Gleixner @ 2017-09-06 13:57 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Borislav Petkov, thomas.lendacky, X86 ML, linux-kernel,
	Ingo Molnar, H. Peter Anvin

On Wed, 6 Sep 2017, Boris Ostrovsky wrote:
> On 09/06/2017 05:26 AM, Borislav Petkov wrote:
> > On Tue, Sep 05, 2017 at 11:45:07PM -0400, Boris Ostrovsky wrote:
> >> It appears there is a regression for 32-bit kernels due to SME changes.
> >>
> >> I bisected my particular problem
> > It being? Doesn't boot, splats?
> 
> Xen guest crashes very early, before a splat can be generated.
> 
> >
> >> (Xen PV guest) to
> >> 21729f81ce8ae76a6995681d40e16f7ce8075db4 but I also saw pmd_clear_bad()
> >> errors on baremetal. This seems to be caused by sme_me_mask being an
> >> unsigned long as opposed to phys_addr_t (the actual problem is that
> >> __PHYSICAL_MASK is truncated). When I declare it as u64 and drop the unsigned
> >> long cast in __sme_set()/__sme_clr() the problem goes away. (This presumably
> >> won't work for non-PAE, which I haven't tried.)
> > Right, so I think we should do this because those macros should not have
> > any effect on !CONFIG_AMD_MEM_ENCRYPT setups.
> 
> This won't help, though, if the kernel is built with SME support.

Which is not the case for 32-bit. SME depends on 64-bit.
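For reference, the dependency Thomas refers to is expressed in the Kconfig entry for the feature. Sketched from memory of arch/x86/Kconfig as of this era; exact wording of the prompt string may differ:

```
config AMD_MEM_ENCRYPT
	bool "AMD Secure Memory Encryption (SME) support"
	depends on X86_64 && CPU_SUP_AMD
```

Because of the X86_64 dependency, CONFIG_AMD_MEM_ENCRYPT can never be set on a 32-bit build, so the !CONFIG_AMD_MEM_ENCRYPT stubs are the ones a 32-bit kernel sees.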

* Re: SME/32-bit regression
  2017-09-06 13:57     ` Thomas Gleixner
@ 2017-09-06 14:03       ` Boris Ostrovsky
  0 siblings, 0 replies; 13+ messages in thread
From: Boris Ostrovsky @ 2017-09-06 14:03 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Borislav Petkov, thomas.lendacky, X86 ML, linux-kernel,
	Ingo Molnar, H. Peter Anvin

On 09/06/2017 09:57 AM, Thomas Gleixner wrote:
> On Wed, 6 Sep 2017, Boris Ostrovsky wrote:
>> On 09/06/2017 05:26 AM, Borislav Petkov wrote:
>>> On Tue, Sep 05, 2017 at 11:45:07PM -0400, Boris Ostrovsky wrote:
>>>> It appears there is a regression for 32-bit kernels due to SME changes.
>>>>
>>>> I bisected my particular problem
>>> It being? Doesn't boot, splats?
>> Xen guest crashes very early, before a splat can be generated.
>>
>>>> (Xen PV guest) to
>>>> 21729f81ce8ae76a6995681d40e16f7ce8075db4 but I also saw pmd_clear_bad()
>>>> errors on baremetal. This seems to be caused by sme_me_mask being an
>>>> unsigned long as opposed to phys_addr_t (the actual problem is that
>>>> __PHYSICAL_MASK is truncated). When I declare it as u64 and drop the unsigned
>>>> long cast in __sme_set()/__sme_clr() the problem goes away. (This presumably
>>>> won't work for non-PAE, which I haven't tried.)
>>> Right, so I think we should do this because those macros should not have
>>> any effect on !CONFIG_AMD_MEM_ENCRYPT setups.
>> This won't help, though, if the kernel is built with SME support.
> Which is not the case for 32-bit. SME depends on 64-bit.

Oh, OK, I didn't realize that.

-boris

* Re: SME/32-bit regression
       [not found]     ` <CY4PR12MB11415936E980489690575E65EC970@CY4PR12MB1141.namprd12.prod.outlook.com>
@ 2017-09-06 16:44       ` Borislav Petkov
  2017-09-06 18:06         ` Brijesh Singh
  0 siblings, 1 reply; 13+ messages in thread
From: Borislav Petkov @ 2017-09-06 16:44 UTC (permalink / raw)
  To: Lendacky, Thomas
  Cc: Boris Ostrovsky, X86 ML, linux-kernel, Ingo Molnar,
	H. Peter Anvin, Thomas Gleixner, Singh, Brijesh

On Wed, Sep 06, 2017 at 04:30:23PM +0000, Lendacky, Thomas wrote:
> Sorry for the top post, I'm on holiday and don't have access to a good
> email client... I went with unsigned long to match all the page table
> related declarations. If changing to u64 doesn't generate any warnings
> or other issues then I'm good with that.

Ok, no worries. Lemme run the smoke-tests on it and test it to see
everything else still works.

Thx.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

* Re: SME/32-bit regression
  2017-09-06 16:44       ` Borislav Petkov
@ 2017-09-06 18:06         ` Brijesh Singh
  2017-09-06 18:19           ` Borislav Petkov
  0 siblings, 1 reply; 13+ messages in thread
From: Brijesh Singh @ 2017-09-06 18:06 UTC (permalink / raw)
  To: Borislav Petkov, Lendacky, Thomas
  Cc: brijesh.singh, Boris Ostrovsky, X86 ML, linux-kernel,
	Ingo Molnar, H. Peter Anvin, Thomas Gleixner



On 09/06/2017 11:44 AM, Borislav Petkov wrote:
> On Wed, Sep 06, 2017 at 04:30:23PM +0000, Lendacky, Thomas wrote:
>> Sorry for the top post, I'm on holiday and don't have access to a good
>> email client... I went with unsigned long to match all the page table
>> related declarations. If changing to u64 doesn't generate any warnings
>> or other issues then I'm good with that.
> 
> Ok, no worries. Lemme run the smoke-tests on it and test it to see
> everything else still works.
> 

I did the following quick run with your patch and everything seems to be
working okay.

64-bit build:
-------
1) Baremetal SME *enabled* - System boots fine
  a) 32-bit guest launch : successful (KVM HV)
  b) 64-bit guest launch : successful (KVM HV)
  c) 64-bit SEV guest launch : successful (KVM HV)

2) Baremetal SME *disabled* - System boots fine
  a) 32-bit guest launch : successful (KVM HV)
  b) 64-bit guest launch : successful (KVM HV)
  c) 64-bit SEV guest launch : successful (KVM HV)

32-bit build
----------
I am installing 32-bit distro to verify 32-bit baremetal boot and will
report my findings soon.

-Brijesh

* Re: SME/32-bit regression
  2017-09-06 18:06         ` Brijesh Singh
@ 2017-09-06 18:19           ` Borislav Petkov
  2017-09-06 21:03             ` Boris Ostrovsky
  0 siblings, 1 reply; 13+ messages in thread
From: Borislav Petkov @ 2017-09-06 18:19 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: Lendacky, Thomas, Boris Ostrovsky, X86 ML, linux-kernel,
	Ingo Molnar, H. Peter Anvin, Thomas Gleixner

On Wed, Sep 06, 2017 at 01:06:50PM -0500, Brijesh Singh wrote:
> I did the following quick run with your patch and everything seems to be
> working okay
> 
> 64-bit build:
> -------
> 1) Baremetal SME *enabled* - System boots fine
>  a) 32-bit guest launch : successful (KVM HV)
>  b) 64-bit guest launch : successful (KVM HV)
>  c) 64-bit SEV guest launch : successful (KVM HV)
> 
> 2) Baremetal SME *disabled* - System boots fine
>  a) 32-bit guest launch : successful (KVM HV)
>  b) 64-bit guest launch : successful (KVM HV)
>  c) 64-bit SEV guest launch : successful (KVM HV)
> 
> 32-bit build
> ----------
> I am installing 32-bit distro to verify 32-bit baremetal boot and will
> report my findings soon.

Thanks Brijesh, that's awesome!

I'll add your Tested-by once you're done testing successfully.

:-)

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

* Re: SME/32-bit regression
  2017-09-06 18:19           ` Borislav Petkov
@ 2017-09-06 21:03             ` Boris Ostrovsky
  2017-09-07  0:26               ` Brijesh Singh
  0 siblings, 1 reply; 13+ messages in thread
From: Boris Ostrovsky @ 2017-09-06 21:03 UTC (permalink / raw)
  To: Borislav Petkov, Brijesh Singh
  Cc: Lendacky, Thomas, X86 ML, linux-kernel, Ingo Molnar,
	H. Peter Anvin, Thomas Gleixner

On 09/06/2017 02:19 PM, Borislav Petkov wrote:
> On Wed, Sep 06, 2017 at 01:06:50PM -0500, Brijesh Singh wrote:
>> I did the following quick run with your patch and everything seems to be
>> working okay
>>
>> 64-bit build:
>> -------
>> 1) Baremetal SME *enabled* - System boots fine
>>  a) 32-bit guest launch : successful (KVM HV)
>>  b) 64-bit guest launch : successful (KVM HV)
>>  c) 64-bit SEV guest launch : successful (KVM HV)
>>
>> 2) Baremetal SME *disabled* - System boots fine
>>  a) 32-bit guest launch : successful (KVM HV)
>>  b) 64-bit guest launch : successful (KVM HV)
>>  c) 64-bit SEV guest launch : successful (KVM HV)
>>
>> 32-bit build
>> ----------
>> I am installing 32-bit distro to verify 32-bit baremetal boot and will
>> report my findings soon.
> Thanks Brijesh, that's awesome!
>
> I'll add your Tested-by once you're done testing successfully.


You can have my Tested-by (mostly Xen but some baremetal too).

-boris

* Re: SME/32-bit regression
  2017-09-06 21:03             ` Boris Ostrovsky
@ 2017-09-07  0:26               ` Brijesh Singh
  2017-09-07  9:38                 ` [PATCH] x86/mm: Make the SME mask a u64 Borislav Petkov
  0 siblings, 1 reply; 13+ messages in thread
From: Brijesh Singh @ 2017-09-07  0:26 UTC (permalink / raw)
  To: Boris Ostrovsky, Borislav Petkov
  Cc: brijesh.singh, Lendacky, Thomas, X86 ML, linux-kernel,
	Ingo Molnar, H. Peter Anvin, Thomas Gleixner



On 09/06/2017 04:03 PM, Boris Ostrovsky wrote:
> On 09/06/2017 02:19 PM, Borislav Petkov wrote:
>> On Wed, Sep 06, 2017 at 01:06:50PM -0500, Brijesh Singh wrote:
>>> I did the following quick run with your patch and everything seems to be
>>> working okay
>>>
>>> 64-bit build:
>>> -------
>>> 1) Baremetal SME *enabled* - System boots fine
>>>   a) 32-bit guest launch : successful (KVM HV)
>>>   b) 64-bit guest launch : successful (KVM HV)
>>>   c) 64-bit SEV guest launch : successful (KVM HV)
>>>
>>> 2) Baremetal SME *disabled* - System boots fine
>>>   a) 32-bit guest launch : successful (KVM HV)
>>>   b) 64-bit guest launch : successful (KVM HV)
>>>   c) 64-bit SEV guest launch : successful (KVM HV)
>>>
>>> 32-bit build
>>> ----------
>>> I am installing 32-bit distro to verify 32-bit baremetal boot and will
>>> report my findings soon.
>> Thanks Brijesh, that's awesome!
>>
>> I'll add your Tested-by once you're done testing successfully.
> 

32-bit seems to be working well - thanks

-Brijesh

> 
> You can have my Tested-by (mostly Xen but some baremetal too).
> 
> -boris
> 

* [PATCH] x86/mm: Make the SME mask a u64
  2017-09-07  0:26               ` Brijesh Singh
@ 2017-09-07  9:38                 ` Borislav Petkov
  2017-09-07 10:34                   ` [tip:x86/urgent] " tip-bot for Borislav Petkov
  0 siblings, 1 reply; 13+ messages in thread
From: Borislav Petkov @ 2017-09-07  9:38 UTC (permalink / raw)
  To: x86-ml
  Cc: Brijesh Singh, Boris Ostrovsky, Lendacky, Thomas, linux-kernel,
	Ingo Molnar, H. Peter Anvin, Thomas Gleixner

On Wed, Sep 06, 2017 at 07:26:40PM -0500, Brijesh Singh wrote:
> 32-bit seems to be working well - thanks

Thanks Boris and Brijesh for testing!

---
From: Borislav Petkov <bp@suse.de>

The SME encryption mask is for masking 64-bit pagetable entries. It
being an unsigned long works fine on X86_64, but on 32-bit builds it
truncates bits, leading to Xen guests crashing very early.

And regardless, the whole SME mask handling shouldn't have leaked into
32-bit because SME is an X86_64-only feature. So, first make the mask u64.
And then, add trivial 32-bit versions of the __sme_* macros so that
nothing happens there.

Reported-and-tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Tom Lendacky <Thomas.Lendacky@amd.com>
Fixes: 21729f81ce8a ("x86/mm: Provide general kernel support for memory encryption")
---
 arch/x86/include/asm/mem_encrypt.h |  4 ++--
 arch/x86/mm/mem_encrypt.c          |  2 +-
 include/linux/mem_encrypt.h        | 13 +++++++++----
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 8e618fcf1f7c..6a77c63540f7 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -21,7 +21,7 @@
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
-extern unsigned long sme_me_mask;
+extern u64 sme_me_mask;
 
 void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
 			 unsigned long decrypted_kernel_vaddr,
@@ -49,7 +49,7 @@ void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
 
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
-#define sme_me_mask	0UL
+#define sme_me_mask	0ULL
 
 static inline void __init sme_early_encrypt(resource_size_t paddr,
 					    unsigned long size) { }
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 0fbd09269757..3fcc8e01683b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -37,7 +37,7 @@ static char sme_cmdline_off[] __initdata = "off";
  * reside in the .data section so as not to be zeroed out when the .bss
  * section is later cleared.
  */
-unsigned long sme_me_mask __section(.data) = 0;
+u64 sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
 
 /* Buffer used for early in-place encryption by BSP, no locking needed */
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 1255f09f5e42..265a9cd21cb4 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -21,7 +21,7 @@
 
 #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
-#define sme_me_mask	0UL
+#define sme_me_mask	0ULL
 
 #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
@@ -30,18 +30,23 @@ static inline bool sme_active(void)
 	return !!sme_me_mask;
 }
 
-static inline unsigned long sme_get_me_mask(void)
+static inline u64 sme_get_me_mask(void)
 {
 	return sme_me_mask;
 }
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 /*
  * The __sme_set() and __sme_clr() macros are useful for adding or removing
  * the encryption mask from a value (e.g. when dealing with pagetable
  * entries).
  */
-#define __sme_set(x)		((unsigned long)(x) | sme_me_mask)
-#define __sme_clr(x)		((unsigned long)(x) & ~sme_me_mask)
+#define __sme_set(x)		((x) | sme_me_mask)
+#define __sme_clr(x)		((x) & ~sme_me_mask)
+#else
+#define __sme_set(x)		(x)
+#define __sme_clr(x)		(x)
+#endif
 
 #endif	/* __ASSEMBLY__ */
 
-- 
2.13.0

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

* [tip:x86/urgent] x86/mm: Make the SME mask a u64
  2017-09-07  9:38                 ` [PATCH] x86/mm: Make the SME mask a u64 Borislav Petkov
@ 2017-09-07 10:34                   ` tip-bot for Borislav Petkov
  0 siblings, 0 replies; 13+ messages in thread
From: tip-bot for Borislav Petkov @ 2017-09-07 10:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, hpa, bp, linux-kernel, boris.ostrovsky, brijesh.singh,
	peterz, mingo, Thomas.Lendacky, torvalds

Commit-ID:  21d9bb4a05bac50fb4f850517af4030baecd00f6
Gitweb:     http://git.kernel.org/tip/21d9bb4a05bac50fb4f850517af4030baecd00f6
Author:     Borislav Petkov <bp@suse.de>
AuthorDate: Thu, 7 Sep 2017 11:38:37 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 7 Sep 2017 11:53:11 +0200

x86/mm: Make the SME mask a u64

The SME encryption mask is for masking 64-bit pagetable entries. It
being an unsigned long works fine on X86_64, but on 32-bit builds it
truncates bits, leading to Xen guests crashing very early.

And regardless, the whole SME mask handling shouldn't have leaked into
32-bit because SME is an X86_64-only feature. So, first make the mask u64.
And then, add trivial 32-bit versions of the __sme_* macros so that
nothing happens there.

Reported-and-tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Tom Lendacky <Thomas.Lendacky@amd.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas <Thomas.Lendacky@amd.com>
Fixes: 21729f81ce8a ("x86/mm: Provide general kernel support for memory encryption")
Link: http://lkml.kernel.org/r/20170907093837.76zojtkgebwtqc74@pd.tnic
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/mem_encrypt.h |  4 ++--
 arch/x86/mm/mem_encrypt.c          |  2 +-
 include/linux/mem_encrypt.h        | 13 +++++++++----
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 8e618fc..6a77c63 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -21,7 +21,7 @@
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
-extern unsigned long sme_me_mask;
+extern u64 sme_me_mask;
 
 void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
 			 unsigned long decrypted_kernel_vaddr,
@@ -49,7 +49,7 @@ void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
 
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
-#define sme_me_mask	0UL
+#define sme_me_mask	0ULL
 
 static inline void __init sme_early_encrypt(resource_size_t paddr,
 					    unsigned long size) { }
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 0fbd092..3fcc8e0 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -37,7 +37,7 @@ static char sme_cmdline_off[] __initdata = "off";
  * reside in the .data section so as not to be zeroed out when the .bss
  * section is later cleared.
  */
-unsigned long sme_me_mask __section(.data) = 0;
+u64 sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
 
 /* Buffer used for early in-place encryption by BSP, no locking needed */
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 1255f09..265a9cd 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -21,7 +21,7 @@
 
 #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
-#define sme_me_mask	0UL
+#define sme_me_mask	0ULL
 
 #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
@@ -30,18 +30,23 @@ static inline bool sme_active(void)
 	return !!sme_me_mask;
 }
 
-static inline unsigned long sme_get_me_mask(void)
+static inline u64 sme_get_me_mask(void)
 {
 	return sme_me_mask;
 }
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 /*
  * The __sme_set() and __sme_clr() macros are useful for adding or removing
  * the encryption mask from a value (e.g. when dealing with pagetable
  * entries).
  */
-#define __sme_set(x)		((unsigned long)(x) | sme_me_mask)
-#define __sme_clr(x)		((unsigned long)(x) & ~sme_me_mask)
+#define __sme_set(x)		((x) | sme_me_mask)
+#define __sme_clr(x)		((x) & ~sme_me_mask)
+#else
+#define __sme_set(x)		(x)
+#define __sme_clr(x)		(x)
+#endif
 
 #endif	/* __ASSEMBLY__ */
 
