* [PATCH v2] powerpc/mm/hash: Fix get_region_id() for invalid addresses
From: Michael Ellerman @ 2019-05-17 13:29 UTC
  To: linuxppc-dev; +Cc: aneesh.kumar, npiggin

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>

Accesses by userspace to random addresses outside the user or kernel
address range will generate an SLB fault. When we handle that fault we
classify the effective address into several classes, eg. user, kernel
linear, kernel virtual etc.

For addresses that are completely outside of any valid range, we
should not insert an SLB entry at all, and instead immediately
report an exception.

In the past this was handled in two ways. Firstly we would check the
top nibble of the address (using REGION_ID(ea)) and that would tell us
if the address was user (0), kernel linear (c), kernel virtual (d), or
vmemmap (f). If the address didn't match any of these it was invalid.

Then for each type of address we would do a secondary check. For the
user region we check against H_PGTABLE_RANGE, for kernel linear we
would mask the top nibble of the address and then check the address
against MAX_PHYSMEM_BITS.
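
For illustration, a rough sketch of that old two-stage logic (not the
actual kernel code; the helper and the 0xd/0xf handling are
approximated):

	/* Sketch only: approximates the pre-0034d395f89d checks. */
	#define REGION_ID(ea)	((ea) >> 60)	/* top nibble */

	static bool old_slb_addr_valid(unsigned long ea)
	{
		switch (REGION_ID(ea)) {
		case 0x0:	/* user */
			return ea < H_PGTABLE_RANGE;
		case 0xc:	/* kernel linear */
			/* mask the top nibble, then bounds check */
			return (ea & ((1UL << 60) - 1)) <
				(1UL << MAX_PHYSMEM_BITS);
		case 0xd:	/* kernel virtual (vmalloc/IO) */
		case 0xf:	/* vmemmap */
			return true;
		default:	/* invalid: no SLB entry, report a fault */
			return false;
		}
	}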

As part of commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel
regions in the same 0xc range") we replaced REGION_ID() with
get_region_id() and changed the masking of the top nibble to only mask
the top two bits, which introduced a bug.

Addresses less than (4 << 60) are still handled correctly: they are
either less than (1 << 60), in which case they are subject to the
H_PGTABLE_RANGE check, or they are correctly checked against
MAX_PHYSMEM_BITS.

However, addresses from (4 << 60) to ((0xc << 60) - 1) are incorrectly
treated as kernel linear addresses in get_region_id(). Then the top
two bits are cleared by EA_MASK in slb_allocate_kernel() and the
address is checked against MAX_PHYSMEM_BITS, which it passes due to
the masking. The end result is we incorrectly insert SLB entries for
those addresses.
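
To make that concrete, a hypothetical walk-through for one such
address (the constants are illustrative only):

	/* ea in the invalid range, eg. (4 << 60) plus a small offset */
	unsigned long ea = (4UL << 60) | 0x1000;

	/*
	 * get_region_id(): id = ea >> 60 = 0x4, which is non-zero, and
	 * ea < H_KERN_VIRT_START, so we fall through and incorrectly
	 * return LINEAR_MAP_REGION_ID.
	 *
	 * slb_allocate_kernel(): ea & EA_MASK clears the top two bits,
	 * leaving 0x1000, which passes the MAX_PHYSMEM_BITS check, so
	 * an SLB entry is inserted for a completely bogus address.
	 */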

That is not actually catastrophic: having inserted the SLB entry we
will then go on to take a page fault for the address, and at that
point we detect the problem and report it as a bad fault.

Still we should not be inserting those entries, or treating them as
kernel linear addresses in the first place. So fix get_region_id() to
detect addresses in that range and return an invalid region id, which
we can then use to avoid inserting an SLB entry and to directly
report an exception.
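
With the fix the classification reads roughly as below (a sketch
reconstructed from the diff that follows; the tail of the function is
abbreviated):

	static inline int get_region_id(unsigned long ea)
	{
		int id = (ea >> 60UL);

		if (id == 0)
			return USER_REGION_ID;

		/*
		 * New: anything outside the user (0x0) and kernel (0xc)
		 * regions is invalid, so the SLB miss handler can skip
		 * inserting an entry and report the fault directly.
		 */
		if (id != (PAGE_OFFSET >> 60))
			return INVALID_REGION_ID;

		if (ea < H_KERN_VIRT_START)
			return LINEAR_MAP_REGION_ID;

		/* abbreviated: resolve vmalloc, IO or vmemmap */
		return NON_LINEAR_REGION_ID(ea);
	}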

Fixes: 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Drop change to EA_MASK for now, rewrite change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/book3s/64/hash.h | 4 ++++
 1 file changed, 4 insertions(+)

v2: Drop change to EA_MASK for now, rewrite change log.

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 5486087e64ea..2781ebf6add4 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -93,6 +93,7 @@
 #define VMALLOC_REGION_ID	NON_LINEAR_REGION_ID(H_VMALLOC_START)
 #define IO_REGION_ID		NON_LINEAR_REGION_ID(H_KERN_IO_START)
 #define VMEMMAP_REGION_ID	NON_LINEAR_REGION_ID(H_VMEMMAP_START)
+#define INVALID_REGION_ID	(VMEMMAP_REGION_ID + 1)
 
 /*
  * Defines the address of the vmemap area, in its own region on
@@ -119,6 +120,9 @@ static inline int get_region_id(unsigned long ea)
 	if (id == 0)
 		return USER_REGION_ID;
 
+	if (id != (PAGE_OFFSET >> 60))
+		return INVALID_REGION_ID;
+
 	if (ea < H_KERN_VIRT_START)
 		return LINEAR_MAP_REGION_ID;
 
-- 
2.20.1



* Re: [PATCH v2] powerpc/mm/hash: Fix get_region_id() for invalid addresses
From: Nicholas Piggin @ 2019-05-18  1:56 UTC
  To: linuxppc-dev, Michael Ellerman; +Cc: aneesh.kumar

Michael Ellerman wrote on May 17, 2019 11:29 pm:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
> 
> Accesses by userspace to random addresses outside the user or kernel
> address range will generate an SLB fault. When we handle that fault we
> classify the effective address into several classes, eg. user, kernel
> linear, kernel virtual etc.
> 
> For addresses that are completely outside of any valid range, we
> should not insert an SLB entry at all, and instead immediately
> report an exception.
> 
> In the past this was handled in two ways. Firstly we would check the
> top nibble of the address (using REGION_ID(ea)) and that would tell us
> if the address was user (0), kernel linear (c), kernel virtual (d), or
> vmemmap (f). If the address didn't match any of these it was invalid.
> 
> Then for each type of address we would do a secondary check. For the
> user region we check against H_PGTABLE_RANGE, for kernel linear we
> would mask the top nibble of the address and then check the address
> against MAX_PHYSMEM_BITS.
> 
> As part of commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel
> regions in the same 0xc range") we replaced REGION_ID() with
> get_region_id() and changed the masking of the top nibble to only mask
> the top two bits, which introduced a bug.
> 
> Addresses less than (4 << 60) are still handled correctly: they are
> either less than (1 << 60), in which case they are subject to the
> H_PGTABLE_RANGE check, or they are correctly checked against
> MAX_PHYSMEM_BITS.
> 
> However, addresses from (4 << 60) to ((0xc << 60) - 1) are incorrectly
> treated as kernel linear addresses in get_region_id(). Then the top
> two bits are cleared by EA_MASK in slb_allocate_kernel() and the
> address is checked against MAX_PHYSMEM_BITS, which it passes due to
> the masking. The end result is we incorrectly insert SLB entries for
> those addresses.
> 
> That is not actually catastrophic: having inserted the SLB entry we
> will then go on to take a page fault for the address, and at that
> point we detect the problem and report it as a bad fault.
> 
> Still we should not be inserting those entries, or treating them as
> kernel linear addresses in the first place. So fix get_region_id() to
> detect addresses in that range and return an invalid region id, which
> we can then use to avoid inserting an SLB entry and to directly
> report an exception.
> 
> Fixes: 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range")
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> [mpe: Drop change to EA_MASK for now, rewrite change log]
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Looks good to me.

> ---
>  arch/powerpc/include/asm/book3s/64/hash.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> v2: Drop change to EA_MASK for now, rewrite change log.
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
> index 5486087e64ea..2781ebf6add4 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
> @@ -93,6 +93,7 @@
>  #define VMALLOC_REGION_ID	NON_LINEAR_REGION_ID(H_VMALLOC_START)
>  #define IO_REGION_ID		NON_LINEAR_REGION_ID(H_KERN_IO_START)
>  #define VMEMMAP_REGION_ID	NON_LINEAR_REGION_ID(H_VMEMMAP_START)
> +#define INVALID_REGION_ID	(VMEMMAP_REGION_ID + 1)
>  
>  /*
>   * Defines the address of the vmemap area, in its own region on
> @@ -119,6 +120,9 @@ static inline int get_region_id(unsigned long ea)
>  	if (id == 0)
>  		return USER_REGION_ID;
>  
> +	if (id != (PAGE_OFFSET >> 60))
> +		return INVALID_REGION_ID;
> +
>  	if (ea < H_KERN_VIRT_START)
>  		return LINEAR_MAP_REGION_ID;
>  
> -- 
> 2.20.1
> 
> 


* Re: [PATCH v2] powerpc/mm/hash: Fix get_region_id() for invalid addresses
From: Michael Ellerman @ 2019-05-18 10:28 UTC
  To: Nicholas Piggin, linuxppc-dev; +Cc: aneesh.kumar

Nicholas Piggin <npiggin@gmail.com> writes:
> Michael Ellerman wrote on May 17, 2019 11:29 pm:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
>> 
>> Accesses by userspace to random addresses outside the user or kernel
>> address range will generate an SLB fault. When we handle that fault we
>> classify the effective address into several classes, eg. user, kernel
>> linear, kernel virtual etc.
>> 
>> For addresses that are completely outside of any valid range, we
>> should not insert an SLB entry at all, and instead immediately
>> report an exception.
>> 
>> In the past this was handled in two ways. Firstly we would check the
>> top nibble of the address (using REGION_ID(ea)) and that would tell us
>> if the address was user (0), kernel linear (c), kernel virtual (d), or
>> vmemmap (f). If the address didn't match any of these it was invalid.
>> 
>> Then for each type of address we would do a secondary check. For the
>> user region we check against H_PGTABLE_RANGE, for kernel linear we
>> would mask the top nibble of the address and then check the address
>> against MAX_PHYSMEM_BITS.
>> 
>> As part of commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel
>> regions in the same 0xc range") we replaced REGION_ID() with
>> get_region_id() and changed the masking of the top nibble to only mask
>> the top two bits, which introduced a bug.
>> 
>> Addresses less than (4 << 60) are still handled correctly: they are
>> either less than (1 << 60), in which case they are subject to the
>> H_PGTABLE_RANGE check, or they are correctly checked against
>> MAX_PHYSMEM_BITS.
>> 
>> However, addresses from (4 << 60) to ((0xc << 60) - 1) are incorrectly
>> treated as kernel linear addresses in get_region_id(). Then the top
>> two bits are cleared by EA_MASK in slb_allocate_kernel() and the
>> address is checked against MAX_PHYSMEM_BITS, which it passes due to
>> the masking. The end result is we incorrectly insert SLB entries for
>> those addresses.
>> 
>> That is not actually catastrophic: having inserted the SLB entry we
>> will then go on to take a page fault for the address, and at that
>> point we detect the problem and report it as a bad fault.
>> 
>> Still we should not be inserting those entries, or treating them as
>> kernel linear addresses in the first place. So fix get_region_id() to
>> detect addresses in that range and return an invalid region id, which
>> we can then use to avoid inserting an SLB entry and to directly
>> report an exception.
>> 
>> Fixes: 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range")
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>> [mpe: Drop change to EA_MASK for now, rewrite change log]
>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>
> Looks good to me.

Thanks for reviewing.

cheers


* Re: [PATCH v2] powerpc/mm/hash: Fix get_region_id() for invalid addresses
From: Michael Ellerman @ 2019-05-21 11:39 UTC
  To: Michael Ellerman, linuxppc-dev; +Cc: aneesh.kumar, npiggin

On Fri, 2019-05-17 at 13:29:58 UTC, Michael Ellerman wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
> 
> Accesses by userspace to random addresses outside the user or kernel
> address range will generate an SLB fault. When we handle that fault we
> classify the effective address into several classes, eg. user, kernel
> linear, kernel virtual etc.
> 
> For addresses that are completely outside of any valid range, we
> should not insert an SLB entry at all, and instead immediately
> report an exception.
> 
> In the past this was handled in two ways. Firstly we would check the
> top nibble of the address (using REGION_ID(ea)) and that would tell us
> if the address was user (0), kernel linear (c), kernel virtual (d), or
> vmemmap (f). If the address didn't match any of these it was invalid.
> 
> Then for each type of address we would do a secondary check. For the
> user region we check against H_PGTABLE_RANGE, for kernel linear we
> would mask the top nibble of the address and then check the address
> against MAX_PHYSMEM_BITS.
> 
> As part of commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel
> regions in the same 0xc range") we replaced REGION_ID() with
> get_region_id() and changed the masking of the top nibble to only mask
> the top two bits, which introduced a bug.
> 
> Addresses less than (4 << 60) are still handled correctly: they are
> either less than (1 << 60), in which case they are subject to the
> H_PGTABLE_RANGE check, or they are correctly checked against
> MAX_PHYSMEM_BITS.
> 
> However, addresses from (4 << 60) to ((0xc << 60) - 1) are incorrectly
> treated as kernel linear addresses in get_region_id(). Then the top
> two bits are cleared by EA_MASK in slb_allocate_kernel() and the
> address is checked against MAX_PHYSMEM_BITS, which it passes due to
> the masking. The end result is we incorrectly insert SLB entries for
> those addresses.
> 
> That is not actually catastrophic: having inserted the SLB entry we
> will then go on to take a page fault for the address, and at that
> point we detect the problem and report it as a bad fault.
> 
> Still we should not be inserting those entries, or treating them as
> kernel linear addresses in the first place. So fix get_region_id() to
> detect addresses in that range and return an invalid region id, which
> we can then use to avoid inserting an SLB entry and to directly
> report an exception.
> 
> Fixes: 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range")
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> [mpe: Drop change to EA_MASK for now, rewrite change log]
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Applied to powerpc fixes.

https://git.kernel.org/powerpc/c/c179976cf4cbd2e65f29741d5bc07ccf

cheers
