linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH] arm64: fix VTTBR_BADDR_MASK
@ 2014-07-09 16:17 Joel Schopp
  2014-07-10 20:25 ` Christoffer Dall
  0 siblings, 1 reply; 5+ messages in thread
From: Joel Schopp @ 2014-07-09 16:17 UTC (permalink / raw)
  To: linux-arm-kernel

The current calculation for VTTBR_BADDR_MASK masks only 39 bits and not
all 40 bits.  That last bit is important as some systems allocate
from near the top of the available address space.

This patch is necessary to run KVM on an aarch64 SOC I have been testing.

Signed-off-by: Joel Schopp <joel.schopp@amd.com>
---
 arch/arm64/include/asm/kvm_arm.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 3d69030..b39e93f 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -148,7 +148,7 @@
 #endif
 
 #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
-#define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
+#define VTTBR_BADDR_MASK  (0xffffffffffLLU)              /* bits 0-39 */
 #define VTTBR_VMID_SHIFT  (48LLU)
 #define VTTBR_VMID_MASK	  (0xffLLU << VTTBR_VMID_SHIFT)
 

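For illustration, a minimal userspace sketch of the arithmetic, assuming
for illustration that VTTBR_X evaluates to 13 (roughly what the 4K-page
configuration in this header gives; the exact value depends on the
VTCR_EL2 settings):

/* Illustrative only: compare the old and new mask values for an
 * assumed VTTBR_X of 13. */
#include <stdio.h>

int main(void)
{
	unsigned long long x = 13;                       /* assumed VTTBR_X */
	unsigned long long old_mask =
		((1LLU << (40 - x)) - 1) << (x - 1);     /* covers bits [12:38] */
	unsigned long long new_mask = 0xffffffffffLLU;   /* covers bits [0:39]  */

	printf("old: 0x%llx\n", old_mask);  /* 0x7ffffff000 - bit 39 never covered */
	printf("new: 0x%llx\n", new_mask);  /* 0xffffffffff */
	return 0;
}

Whatever VTTBR_X works out to, the highest bit the old definition can
cover is bit 38, so a pgd whose physical address has bit 39 set would
have that bit dropped when programmed into VTTBR_EL2.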

* [PATCH] arm64: fix VTTBR_BADDR_MASK
  2014-07-09 16:17 [PATCH] arm64: fix VTTBR_BADDR_MASK Joel Schopp
@ 2014-07-10 20:25 ` Christoffer Dall
  2014-07-10 21:02   ` Joel Schopp
  0 siblings, 1 reply; 5+ messages in thread
From: Christoffer Dall @ 2014-07-10 20:25 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 09, 2014 at 11:17:04AM -0500, Joel Schopp wrote:
> The current calculation for VTTBR_BADDR_MASK masks only 39 bits and not
> all 40 bits.  That last bit is important as some systems allocate
> from near the top of the available address space.
> 
> This patch is necessary to run KVM on an aarch64 SOC I have been testing.
> 
> Signed-off-by: Joel Schopp <joel.schopp@amd.com>
> ---
>  arch/arm64/include/asm/kvm_arm.h |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 3d69030..b39e93f 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -148,7 +148,7 @@
>  #endif
>  
>  #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
> -#define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
> +#define VTTBR_BADDR_MASK  (0xffffffffffLLU)              /* bits 0-39 */
>  #define VTTBR_VMID_SHIFT  (48LLU)
>  #define VTTBR_VMID_MASK	  (0xffLLU << VTTBR_VMID_SHIFT)
>  
> 

While this is obviously fixing a bug, it doesn't feel like the right
short-term fix.  I'll have to go back and read the exact definition of
x in BADDR[47:x] for VTTBR_EL2 again, but the intended use of
VTTBR_BADDR_MASK (and the only occurrence of it in C code) is to deal
with alignment of the allocated pgd.

So while all the MSBs in the physical address range of your system
should be used, we still need to check that the pgd we allocate is
actually aligned properly, because writing any of the bits in [x-1:0]
as non-zero can cause unpredictable behavior and corrupt the results
of the translation walk.

So you do need to use a proper shift.  I'm fine with fixing it for the
assumed 40-bit physical address space for now, but you need to figure
out the right value of 'x' in BADDR[47:x], and you should change the
code in update_vttbr() as part of this patch: add something like
BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK) and make sure the pgd is allocated
at the right alignment, or ensure that the pgd allocation is large
enough that we can just mask off the lower bits and still contain the
required number of entries.

Thanks,
-Christoffer
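
For reference, a rough sketch of the kind of check being suggested
above (names follow the existing update_vttbr() in arch/arm/kvm/arm.c;
the VMID handling is elided and the exact shape here is an assumption,
not the eventual patch):

/* Sketch only: program VTTBR with an alignment check on the pgd. */
static void update_vttbr(struct kvm *kvm)
{
	phys_addr_t pgd_phys;
	u64 vmid;

	/* ... VMID generation and rollover handling elided ... */

	pgd_phys = virt_to_phys(kvm->arch.pgd);
	/*
	 * With VTTBR_BADDR_MASK covering bits [39:x] only, this fires both
	 * when the pgd is not aligned to 1 << x bytes and when it lies
	 * above the 40-bit output range.
	 */
	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);

	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
	kvm->arch.vttbr = (pgd_phys & VTTBR_BADDR_MASK) | vmid;
}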


* [PATCH] arm64: fix VTTBR_BADDR_MASK
  2014-07-10 20:25 ` Christoffer Dall
@ 2014-07-10 21:02   ` Joel Schopp
  2014-07-10 21:51     ` Joel Schopp
  0 siblings, 1 reply; 5+ messages in thread
From: Joel Schopp @ 2014-07-10 21:02 UTC (permalink / raw)
  To: linux-arm-kernel


On 07/10/2014 03:25 PM, Christoffer Dall wrote:
> On Wed, Jul 09, 2014 at 11:17:04AM -0500, Joel Schopp wrote:
>> The current calculation for VTTBR_BADDR_MASK masks only 39 bits and not
>> all 40 bits.  That last bit is important as some systems allocate
>> from near the top of the available address space.
>>
>> This patch is necessary to run KVM on an aarch64 SOC I have been testing.
>>
>> Signed-off-by: Joel Schopp <joel.schopp@amd.com>
>> ---
>>  arch/arm64/include/asm/kvm_arm.h |    2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
>> index 3d69030..b39e93f 100644
>> --- a/arch/arm64/include/asm/kvm_arm.h
>> +++ b/arch/arm64/include/asm/kvm_arm.h
>> @@ -148,7 +148,7 @@
>>  #endif
>>  
>>  #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
>> -#define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
>> +#define VTTBR_BADDR_MASK  (0xffffffffffLLU)              /* bits 0-39 */
>>  #define VTTBR_VMID_SHIFT  (48LLU)
>>  #define VTTBR_VMID_MASK	  (0xffLLU << VTTBR_VMID_SHIFT)
>>  
>>
> While this is obviously fixing a bug, it doesn't feel like the right
> short-term fix.  I'll have to go back and read the exact definition of
> x in BADDR[47:x] for VTTBR_EL2 again, but the intended use of
> VTTBR_BADDR_MASK (and the only occurrence of it in C code) is to deal
> with alignment of the allocated pgd.
I think there is some confusion.  Before the patch VTTBR_BADDR_MASK
always evaluated to 0x7fffffffffLLU; after the change it always
evaluates to 0xffffffffffLLU.

Neither before nor after the patch is it dealing with alignment.  Any
bits it throws away (bits 40-47) are the most significant bits, not the
least significant ones.

I could have rewritten the macro like:

#define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X + 1)) - 1) << VTTBR_BADDR_SHIFT)

to correct the bug, but in my opinion the existing code is quite
obfuscated, which is how the bug happened in the first place.  It seemed
easier to just mask the bits in a straightforward, easy-to-understand
manner.  I even added a comment so nobody has to count the f's ;)
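
To make the difference concrete, a small userspace sketch (again
assuming VTTBR_X = 13 for illustration) evaluating the three candidate
definitions side by side:

#include <stdio.h>

int main(void)
{
	unsigned long long x = 13;          /* assumed VTTBR_X */
	unsigned long long shift = x - 1;   /* VTTBR_BADDR_SHIFT as currently defined */

	unsigned long long current  = ((1LLU << (40 - x)) - 1) << shift;      /* bits [12:38] */
	unsigned long long plus_one = ((1LLU << (40 - x + 1)) - 1) << shift;  /* bits [12:39] */
	unsigned long long flat     = 0xffffffffffLLU;                        /* bits [0:39]  */

	printf("current:  0x%llx\n", current);   /* 0x7ffffff000 */
	printf("plus_one: 0x%llx\n", plus_one);  /* 0xfffffff000 */
	printf("flat:     0x%llx\n", flat);      /* 0xffffffffff */
	return 0;
}

Only the first two leave the low-order bits out of the mask, which is
what would let a BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK) style check catch
a misaligned pgd; the flat constant accepts a pgd at any alignment.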


* [PATCH] arm64: fix VTTBR_BADDR_MASK
  2014-07-10 21:02   ` Joel Schopp
@ 2014-07-10 21:51     ` Joel Schopp
  2014-07-11 10:38       ` Christoffer Dall
  0 siblings, 1 reply; 5+ messages in thread
From: Joel Schopp @ 2014-07-10 21:51 UTC (permalink / raw)
  To: linux-arm-kernel


On 07/10/2014 04:02 PM, Joel Schopp wrote:
> On 07/10/2014 03:25 PM, Christoffer Dall wrote:
>> On Wed, Jul 09, 2014 at 11:17:04AM -0500, Joel Schopp wrote:
>>> The current calculation for VTTBR_BADDR_MASK masks only 39 bits and not
>>> all 40 bits.  That last bit is important as some systems allocate
>>> from near the top of the available address space.
>>>
>>> This patch is necessary to run KVM on an aarch64 SOC I have been testing.
>>>
>>> Signed-off-by: Joel Schopp <joel.schopp@amd.com>
>>> ---
>>>  arch/arm64/include/asm/kvm_arm.h |    2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
>>> index 3d69030..b39e93f 100644
>>> --- a/arch/arm64/include/asm/kvm_arm.h
>>> +++ b/arch/arm64/include/asm/kvm_arm.h
>>> @@ -148,7 +148,7 @@
>>>  #endif
>>>  
>>>  #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
>>> -#define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
>>> +#define VTTBR_BADDR_MASK  (0xffffffffffLLU)              /* bits 0-39 */
>>>  #define VTTBR_VMID_SHIFT  (48LLU)
>>>  #define VTTBR_VMID_MASK	  (0xffLLU << VTTBR_VMID_SHIFT)
>>>  
>>>
>> While this is obviously fixing a bug, it doesn't feel like the right
>> short-term fix.  I'll have to go back and read the exact definition of
>> x in BADDR[47:x] for VTTBR_EL2 again, but the intended use of
>> VTTBR_BADDR_MASK (and the only occurrence of it in C code) is to deal
>> with alignment of the allocated pgd.
> I think there is some confusion.  Before the patch VTTBR_BADDR_MASK
> always evaluated to 0x7fffffffffLLU; after the change it always
> evaluates to 0xffffffffffLLU.
>
> Neither before nor after the patch is it dealing with alignment.  Any
> bits it throws away (bits 40-47) are the most significant bits, not the
> least significant ones.
>
> I could have rewritten the macro like:
>
> #define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X + 1)) - 1) << VTTBR_BADDR_SHIFT)
>
> to correct the bug, but in my opinion the existing code is quite
> obfuscated, which is how the bug happened in the first place.  It seemed
> easier to just mask the bits in a straightforward, easy-to-understand
> manner.  I even added a comment so nobody has to count the f's ;)
>
I hate to reply to my own email correcting myself.  But you were
correct.  I will fix and resend a v2.


* [PATCH] arm64: fix VTTBR_BADDR_MASK
  2014-07-10 21:51     ` Joel Schopp
@ 2014-07-11 10:38       ` Christoffer Dall
  0 siblings, 0 replies; 5+ messages in thread
From: Christoffer Dall @ 2014-07-11 10:38 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jul 10, 2014 at 04:51:06PM -0500, Joel Schopp wrote:
> 
> On 07/10/2014 04:02 PM, Joel Schopp wrote:
> > On 07/10/2014 03:25 PM, Christoffer Dall wrote:
> >> On Wed, Jul 09, 2014 at 11:17:04AM -0500, Joel Schopp wrote:
> >>> The current calculation for VTTBR_BADDR_MASK masks only 39 bits and not
> >>> all 40 bits.  That last bit is important as some systems allocate
> >>> from near the top of the available address space.
> >>>
> >>> This patch is necessary to run KVM on an aarch64 SOC I have been testing.
> >>>
> >>> Signed-off-by: Joel Schopp <joel.schopp@amd.com>
> >>> ---
> >>>  arch/arm64/include/asm/kvm_arm.h |    2 +-
> >>>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> >>> index 3d69030..b39e93f 100644
> >>> --- a/arch/arm64/include/asm/kvm_arm.h
> >>> +++ b/arch/arm64/include/asm/kvm_arm.h
> >>> @@ -148,7 +148,7 @@
> >>>  #endif
> >>>  
> >>>  #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
> >>> -#define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
> >>> +#define VTTBR_BADDR_MASK  (0xffffffffffLLU)              /* bits 0-39 */
> >>>  #define VTTBR_VMID_SHIFT  (48LLU)
> >>>  #define VTTBR_VMID_MASK	  (0xffLLU << VTTBR_VMID_SHIFT)
> >>>  
> >>>
> >> While this is obviously fixing a bug, it doesn't feel like the right
> >> short-term fix.  I'll have to go back and read the exact definition of
> >> x in BADDR[47:x] for VTTBR_EL2 again, but the intended use of
> >> VTTBR_BADDR_MASK (and the only occurrence of it in C code) is to deal
> >> with alignment of the allocated pgd.
> > I think there is some confusion.  Before the patch VTTBR_BADDR_MASK
> > always evaluated to 0x7fffffffffLLU; after the change it always
> > evaluates to 0xffffffffffLLU.
> >
> > Neither before nor after the patch is it dealing with alignment.  Any
> > bits it throws away (bits 40-47) are the most significant bits, not the
> > least significant ones.
> >
> > I could have rewritten the macro like:
> >
> > #define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X + 1)) - 1) << VTTBR_BADDR_SHIFT)
> >
> > to correct the bug, but in my opinion the existing code is quite
> > obfuscated, which is how the bug happened in the first place.  It seemed
> > easier to just mask the bits in a straightforward, easy-to-understand
> > manner.  I even added a comment so nobody has to count the f's ;)
> >
> I hate to reply to my own email correcting myself.  But you were
> correct.  I will fix and resend a v2.

Thanks,
-Christoffer


Thread overview: 5+ messages
2014-07-09 16:17 [PATCH] arm64: fix VTTBR_BADDR_MASK Joel Schopp
2014-07-10 20:25 ` Christoffer Dall
2014-07-10 21:02   ` Joel Schopp
2014-07-10 21:51     ` Joel Schopp
2014-07-11 10:38       ` Christoffer Dall
