* [PATCH v2 0/2] kasan: Fix metadata detection for KASAN_HW_TAGS
@ 2021-01-21 13:19 Vincenzo Frascino
  2021-01-21 13:19 ` [PATCH v2 1/2] arm64: Fix kernel address detection of __is_lm_address() Vincenzo Frascino
  2021-01-21 13:19 ` [PATCH v2 2/2] kasan: Add explicit preconditions to kasan_report() Vincenzo Frascino
  0 siblings, 2 replies; 10+ messages in thread
From: Vincenzo Frascino @ 2021-01-21 13:19 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kasan-dev
  Cc: Vincenzo Frascino, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Leon Romanovsky, Andrey Konovalov,
	Catalin Marinas, Will Deacon

With the introduction of KASAN_HW_TAGS, kasan_report() currently assumes
that every location in memory has valid metadata associated with it,
because addr_has_metadata() always returns true.

As a consequence, an invalid address (e.g. a NULL pointer) passed to
kasan_report() when KASAN_HW_TAGS is enabled leads to a kernel panic.

Example below, based on arm64:

 ==================================================================
 BUG: KASAN: invalid-access in 0x0
 Read at addr 0000000000000000 by task swapper/0/1
 Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
 Mem abort info:
   ESR = 0x96000004
   EC = 0x25: DABT (current EL), IL = 32 bits
   SET = 0, FnV = 0
   EA = 0, S1PTW = 0
 Data abort info:
   ISV = 0, ISS = 0x00000004
   CM = 0, WnR = 0

...

 Call trace:
  mte_get_mem_tag+0x24/0x40
  kasan_report+0x1a4/0x410
  alsa_sound_last_init+0x8c/0xa4
  do_one_initcall+0x50/0x1b0
  kernel_init_freeable+0x1d4/0x23c
  kernel_init+0x14/0x118
  ret_from_fork+0x10/0x34
 Code: d65f03c0 9000f021 f9428021 b6cfff61 (d9600000)
 ---[ end trace 377c8bb45bdd3a1a ]---
 hrtimer: interrupt took 48694256 ns
 note: swapper/0[1] exited with preempt_count 1
 Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
 SMP: stopping secondary CPUs
 Kernel Offset: 0x35abaf140000 from 0xffff800010000000
 PHYS_OFFSET: 0x40000000
 CPU features: 0x0a7e0152,61c0a030
 Memory Limit: none
 ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

This series fixes the behavior of addr_has_metadata() so that it returns
true only when the address is valid.
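
For reference, a minimal sketch of the KASAN_HW_TAGS stub before and after
this series (the authoritative change is the diff in patch 2,
mm/kasan/kasan.h):

/*
 * Sketch only: with KASAN_HW_TAGS the metadata (the memory tag) lives in
 * the memory backing the address itself, so it can only be read for
 * addresses that are actually mapped.
 */
static inline bool addr_has_metadata(const void *addr)
{
	/* Before this series: every address was assumed to have metadata. */
	/* return true; */

	/* After: only vmalloc or valid linear-map addresses qualify. */
	return (is_vmalloc_addr(addr) || virt_addr_valid(addr));
}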

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Leon Romanovsky <leonro@mellanox.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

Vincenzo Frascino (2):
  arm64: Fix kernel address detection of __is_lm_address()
  kasan: Add explicit preconditions to kasan_report()

 arch/arm64/include/asm/memory.h | 2 +-
 mm/kasan/kasan.h                | 2 +-
 mm/kasan/report.c               | 7 +++++++
 3 files changed, 9 insertions(+), 2 deletions(-)

-- 
2.30.0



* [PATCH v2 1/2] arm64: Fix kernel address detection of __is_lm_address()
  2021-01-21 13:19 [PATCH v2 0/2] kasan: Fix metadata detection for KASAN_HW_TAGS Vincenzo Frascino
@ 2021-01-21 13:19 ` Vincenzo Frascino
  2021-01-21 15:12   ` Mark Rutland
  2021-01-21 13:19 ` [PATCH v2 2/2] kasan: Add explicit preconditions to kasan_report() Vincenzo Frascino
  1 sibling, 1 reply; 10+ messages in thread
From: Vincenzo Frascino @ 2021-01-21 13:19 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kasan-dev
  Cc: Vincenzo Frascino, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Leon Romanovsky, Andrey Konovalov,
	Catalin Marinas, Will Deacon

Currently, the __is_lm_address() check just masks out the top 12 bits
of the address, but if they are 0, it still yields a true result.
As a side effect, virt_addr_valid() returns true even for invalid
virtual addresses (e.g. 0x0).

Fix the detection by checking that the address is actually a kernel
address starting at PAGE_OFFSET.
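
To illustrate the difference, here is a small stand-alone sketch. The
PAGE_OFFSET/PAGE_END values below are illustrative only (the real ones
depend on VA_BITS and the kernel configuration):

#include <stdint.h>
#include <stdio.h>

/* Illustrative values only; the real ones depend on the configuration. */
#define PAGE_OFFSET	0xfff0000000000000UL
#define PAGE_END	0xfff8000000000000UL

/* Old check: masking the top bits also maps 0x0 into the linear range. */
#define is_lm_old(addr)	(((uint64_t)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
/* New check: XOR only clears the top bits if they were all set. */
#define is_lm_new(addr)	(((uint64_t)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))

int main(void)
{
	uint64_t lm = PAGE_OFFSET + 0x1000;	/* a genuine linear-map address */

	printf("old: NULL=%d lm=%d\n", is_lm_old(0), is_lm_old(lm));	/* old: NULL=1 lm=1 */
	printf("new: NULL=%d lm=%d\n", is_lm_new(0), is_lm_new(lm));	/* new: NULL=0 lm=1 */
	return 0;
}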

Fixes: f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations")
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
---
 arch/arm64/include/asm/memory.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 18fce223b67b..e04ac898ffe4 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -249,7 +249,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 /*
  * The linear kernel range starts at the bottom of the virtual address space.
  */
-#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
+#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
-- 
2.30.0



* [PATCH v2 2/2] kasan: Add explicit preconditions to kasan_report()
  2021-01-21 13:19 [PATCH v2 0/2] kasan: Fix metadata detection for KASAN_HW_TAGS Vincenzo Frascino
  2021-01-21 13:19 ` [PATCH v2 1/2] arm64: Fix kernel address detection of __is_lm_address() Vincenzo Frascino
@ 2021-01-21 13:19 ` Vincenzo Frascino
  2021-01-21 17:20   ` Andrey Konovalov
  1 sibling, 1 reply; 10+ messages in thread
From: Vincenzo Frascino @ 2021-01-21 13:19 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, kasan-dev
  Cc: Vincenzo Frascino, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Leon Romanovsky, Andrey Konovalov,
	Catalin Marinas, Will Deacon

With the introduction of KASAN_HW_TAGS, kasan_report() dereferences
the address passed as a parameter.

Add a comment to make the preconditions of the function explicit.

Note: An invalid address (e.g. NULL) passed to the function when
KASAN_HW_TAGS is enabled leads to a kernel panic.
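
As a rough illustration of the precondition (the caller below is
hypothetical, not the actual arm64 fault-handling path), the address must
be backed by mapped memory before it is handed to kasan_report():

/*
 * Hypothetical caller, shown only to illustrate the precondition.
 * The validity test mirrors the one this series adds to
 * addr_has_metadata().
 */
static void report_bad_access(unsigned long addr, size_t size,
			      bool is_write, unsigned long ip)
{
	if (!is_vmalloc_addr((void *)addr) && !virt_addr_valid(addr))
		return;		/* unmapped (e.g. NULL): nothing to dereference */

	kasan_report(addr, size, is_write, ip);
}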

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Leon Romanovsky <leonro@mellanox.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
---
 mm/kasan/kasan.h  | 2 +-
 mm/kasan/report.c | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index cc4d9e1d49b1..8c706e7652f2 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -209,7 +209,7 @@ bool check_memory_region(unsigned long addr, size_t size, bool write,
 
 static inline bool addr_has_metadata(const void *addr)
 {
-	return true;
+	return (is_vmalloc_addr(addr) || virt_addr_valid(addr));
 }
 
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index c0fb21797550..8b690091cb37 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -403,6 +403,13 @@ static void __kasan_report(unsigned long addr, size_t size, bool is_write,
 	end_report(&flags);
 }
 
+/**
+ * kasan_report - report kasan fault details
+ * @addr: valid address of the allocation where the tag fault was detected
+ * @size: size of the allocation where the tag fault was detected
+ * @is_write: the instruction that caused the fault was a read or write?
+ * @ip: pointer to the instruction that cause the fault
+ */
 bool kasan_report(unsigned long addr, size_t size, bool is_write,
 			unsigned long ip)
 {
-- 
2.30.0



* Re: [PATCH v2 1/2] arm64: Fix kernel address detection of __is_lm_address()
  2021-01-21 13:19 ` [PATCH v2 1/2] arm64: Fix kernel address detection of __is_lm_address() Vincenzo Frascino
@ 2021-01-21 15:12   ` Mark Rutland
  2021-01-21 15:30     ` Vincenzo Frascino
  0 siblings, 1 reply; 10+ messages in thread
From: Mark Rutland @ 2021-01-21 15:12 UTC (permalink / raw)
  To: Vincenzo Frascino
  Cc: linux-arm-kernel, linux-kernel, kasan-dev, Andrey Konovalov,
	Leon Romanovsky, Alexander Potapenko, Catalin Marinas,
	Andrey Ryabinin, Will Deacon, Dmitry Vyukov, Ard Biesheuvel

[adding Ard]

On Thu, Jan 21, 2021 at 01:19:55PM +0000, Vincenzo Frascino wrote:
> Currently, the __is_lm_address() check just masks out the top 12 bits
> of the address, but if they are 0, it still yields a true result.
> This has as a side effect that virt_addr_valid() returns true even for
> invalid virtual addresses (e.g. 0x0).

When it was added, __is_lm_address() was intended to distinguish valid
kernel virtual addresses (i.e. those in the TTBR1 address range), and
wasn't intended to do anything for addresses outside of this range. See
commit:

  ec6d06efb0bac6cd ("arm64: Add support for CONFIG_DEBUG_VIRTUAL")

... where it simply tests a bit.

So I believe that it's working as intended (though this is poorly
documented), but I think you're saying that usage isn't aligned with
that intent. Given that, I'm not sure the fixes tag is right; I think it
has never had the semantic you're after.

I had thought the same was true for virt_addr_valid(), and that wasn't
expected to be called for VAs outside of the kernel VA range. Is it
actually safe to call that with NULL on other architectures?

I wonder if it's worth virt_addr_valid() having an explicit check for
the kernel VA range, instead.

> Fix the detection checking that it's actually a kernel address starting
> at PAGE_OFFSET.
> 
> Fixes: f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations")
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> ---
>  arch/arm64/include/asm/memory.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 18fce223b67b..e04ac898ffe4 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -249,7 +249,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>  /*
>   * The linear kernel range starts at the bottom of the virtual address space.
>   */
> -#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
> +#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))

If we're going to make this stronger, can we please expand the comment
with the intended semantic? Otherwise we're liable to break this in
future.

Thanks,
Mark.


* Re: [PATCH v2 1/2] arm64: Fix kernel address detection of __is_lm_address()
  2021-01-21 15:12   ` Mark Rutland
@ 2021-01-21 15:30     ` Vincenzo Frascino
  2021-01-21 15:49       ` Mark Rutland
  0 siblings, 1 reply; 10+ messages in thread
From: Vincenzo Frascino @ 2021-01-21 15:30 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, kasan-dev, Andrey Konovalov,
	Leon Romanovsky, Alexander Potapenko, Catalin Marinas,
	Andrey Ryabinin, Will Deacon, Dmitry Vyukov, Ard Biesheuvel



On 1/21/21 3:12 PM, Mark Rutland wrote:
> [adding Ard]
>

Thanks for this; it is related to his patch and I forgot to Cc him directly.

> On Thu, Jan 21, 2021 at 01:19:55PM +0000, Vincenzo Frascino wrote:
>> Currently, the __is_lm_address() check just masks out the top 12 bits
>> of the address, but if they are 0, it still yields a true result.
>> This has as a side effect that virt_addr_valid() returns true even for
>> invalid virtual addresses (e.g. 0x0).
> 
> When it was added, __is_lm_address() was intended to distinguish valid
> kernel virtual addresses (i.e. those in the TTBR1 address range), and
> wasn't intended to do anything for addresses outside of this range. See
> commit:
> 
>   ec6d06efb0bac6cd ("arm64: Add support for CONFIG_DEBUG_VIRTUAL")
> 
> ... where it simply tests a bit.
> 
> So I believe that it's working as intended (though this is poorly
> documented), but I think you're saying that usage isn't aligned with
> that intent. Given that, I'm not sure the fixes tag is right; I think it
> has never had the semantic you're after.
>

I did not do much thinking on the intended semantics. I based my interpretation
on what you are saying (the usage is not aligned with the intent). Based on what
you are saying, I will change the patch description, removing the "Fix" term.

> I had thought the same was true for virt_addr_valid(), and that wasn't
> expected to be called for VAs outside of the kernel VA range. Is it
> actually safe to call that with NULL on other architectures?
> 

I am not sure about this; I did not do any testing outside of arm64.

> I wonder if it's worth virt_addr_valid() having an explicit check for
> the kernel VA range, instead.
> 

I have no strong opinion either way, even though personally I feel that modifying
__is_lm_address() is clearer. Feel free to propose something.

>> Fix the detection checking that it's actually a kernel address starting
>> at PAGE_OFFSET.
>>
>> Fixes: f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations")
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
>> ---
>>  arch/arm64/include/asm/memory.h | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 18fce223b67b..e04ac898ffe4 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -249,7 +249,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>>  /*
>>   * The linear kernel range starts at the bottom of the virtual address space.
>>   */
>> -#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
>> +#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
> 
> If we're going to make this stronger, can we please expand the comment
> with the intended semantic? Otherwise we're liable to break this in
> future.
> 

Based on your reply above, if you agree, I am happy to extend the
comment.

> Thanks,
> Mark.
> 

-- 
Regards,
Vincenzo


* Re: [PATCH v2 1/2] arm64: Fix kernel address detection of __is_lm_address()
  2021-01-21 15:30     ` Vincenzo Frascino
@ 2021-01-21 15:49       ` Mark Rutland
  2021-01-21 16:02         ` Vincenzo Frascino
  0 siblings, 1 reply; 10+ messages in thread
From: Mark Rutland @ 2021-01-21 15:49 UTC (permalink / raw)
  To: Vincenzo Frascino
  Cc: linux-arm-kernel, linux-kernel, kasan-dev, Andrey Konovalov,
	Leon Romanovsky, Alexander Potapenko, Catalin Marinas,
	Andrey Ryabinin, Will Deacon, Dmitry Vyukov, Ard Biesheuvel

On Thu, Jan 21, 2021 at 03:30:51PM +0000, Vincenzo Frascino wrote:
> On 1/21/21 3:12 PM, Mark Rutland wrote:
> > On Thu, Jan 21, 2021 at 01:19:55PM +0000, Vincenzo Frascino wrote:
> >> Currently, the __is_lm_address() check just masks out the top 12 bits
> >> of the address, but if they are 0, it still yields a true result.
> >> This has as a side effect that virt_addr_valid() returns true even for
> >> invalid virtual addresses (e.g. 0x0).
> > 
> > When it was added, __is_lm_address() was intended to distinguish valid
> > kernel virtual addresses (i.e. those in the TTBR1 address range), and
> > wasn't intended to do anything for addresses outside of this range. See
> > commit:
> > 
> >   ec6d06efb0bac6cd ("arm64: Add support for CONFIG_DEBUG_VIRTUAL")
> > 
> > ... where it simply tests a bit.
> > 
> > So I believe that it's working as intended (though this is poorly
> > documented), but I think you're saying that usage isn't aligned with
> > that intent. Given that, I'm not sure the fixes tag is right; I think it
> > has never had the semantic you're after.
> >
> I did not do much thinking on the intended semantics. I based my interpretation
> on what you are saying (the usage is not aligned with the intent). Based on what
> you are saying, I will change the patch description, removing the "Fix" term.

Thanks! I assume that also means removing the fixes tag.

> > I had thought the same was true for virt_addr_valid(), and that wasn't
> > expected to be called for VAs outside of the kernel VA range. Is it
> > actually safe to call that with NULL on other architectures?
> 
> I am not sure on this, did not do any testing outside of arm64.

I think it'd be worth checking, if we're going to use this in common
code.

> > I wonder if it's worth virt_addr_valid() having an explicit check for
> > the kernel VA range, instead.
> 
> I have no strong opinion either way even if personally I feel that modifying
> __is_lm_address() is more clear. Feel free to propose something.

Sure; I'm happy for it to live within __is_lm_address() if that's
simpler overall, given it doesn't look like it's making that more
complex or expensive.

> >> Fix the detection checking that it's actually a kernel address starting
> >> at PAGE_OFFSET.
> >>
> >> Fixes: f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations")
> >> Cc: Catalin Marinas <catalin.marinas@arm.com>
> >> Cc: Will Deacon <will@kernel.org>
> >> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> >> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> >> ---
> >>  arch/arm64/include/asm/memory.h | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> >> index 18fce223b67b..e04ac898ffe4 100644
> >> --- a/arch/arm64/include/asm/memory.h
> >> +++ b/arch/arm64/include/asm/memory.h
> >> @@ -249,7 +249,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> >>  /*
> >>   * The linear kernel range starts at the bottom of the virtual address space.
> >>   */
> >> -#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
> >> +#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
> > 
> > If we're going to make this stronger, can we please expand the comment
> > with the intended semantic? Otherwise we're liable to break this in
> > future.
> 
> Based on your reply on the above matter, if you agree, I am happy to extend the
> comment.

Works for me; how about:

/*
 * Check whether an arbitrary address is within the linear map, which
 * lives in the [PAGE_OFFSET, PAGE_END) interval at the bottom of the
 * kernel's TTBR1 address range.
 */

... with "arbitrary" being the key word.

Thanks,
Mark.


* Re: [PATCH v2 1/2] arm64: Fix kernel address detection of __is_lm_address()
  2021-01-21 15:49       ` Mark Rutland
@ 2021-01-21 16:02         ` Vincenzo Frascino
  2021-01-21 17:43           ` Vincenzo Frascino
  0 siblings, 1 reply; 10+ messages in thread
From: Vincenzo Frascino @ 2021-01-21 16:02 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, kasan-dev, Andrey Konovalov,
	Leon Romanovsky, Alexander Potapenko, Catalin Marinas,
	Andrey Ryabinin, Will Deacon, Dmitry Vyukov, Ard Biesheuvel



On 1/21/21 3:49 PM, Mark Rutland wrote:
> On Thu, Jan 21, 2021 at 03:30:51PM +0000, Vincenzo Frascino wrote:
>> On 1/21/21 3:12 PM, Mark Rutland wrote:
>>> On Thu, Jan 21, 2021 at 01:19:55PM +0000, Vincenzo Frascino wrote:
>>>> Currently, the __is_lm_address() check just masks out the top 12 bits
>>>> of the address, but if they are 0, it still yields a true result.
>>>> This has as a side effect that virt_addr_valid() returns true even for
>>>> invalid virtual addresses (e.g. 0x0).
>>>
>>> When it was added, __is_lm_address() was intended to distinguish valid
>>> kernel virtual addresses (i.e. those in the TTBR1 address range), and
>>> wasn't intended to do anything for addresses outside of this range. See
>>> commit:
>>>
>>>   ec6d06efb0bac6cd ("arm64: Add support for CONFIG_DEBUG_VIRTUAL")
>>>
>>> ... where it simply tests a bit.
>>>
>>> So I believe that it's working as intended (though this is poorly
>>> documented), but I think you're saying that usage isn't aligned with
>>> that intent. Given that, I'm not sure the fixes tag is right; I think it
>>> has never had the semantic you're after.
>>>
>> I did not do much thinking on the intended semantics. I based my interpretation
>> on what you are saying (the usage is not aligned with the intent). Based on what
>> you are saying, I will change the patch description, removing the "Fix" term.
> 
> Thanks! I assume that also means removing the fixes tag.
>

Obviously ;)

>>> I had thought the same was true for virt_addr_valid(), and that wasn't
>>> expected to be called for VAs outside of the kernel VA range. Is it
>>> actually safe to call that with NULL on other architectures?
>>
>> I am not sure on this, did not do any testing outside of arm64.
> 
> I think it'd be worth checking, if we're going to use this in common
> code.
> 

Ok, I will run some tests and let you know.

>>> I wonder if it's worth virt_addr_valid() having an explicit check for
>>> the kernel VA range, instead.
>>
>> I have no strong opinion either way even if personally I feel that modifying
>> __is_lm_address() is more clear. Feel free to propose something.
> 
> Sure; I'm happy for it to live within __is_lm_address() if that's
> simpler overall, given it doesn't look like it's making that more
> complex or expensive.
> 
>>>> Fix the detection checking that it's actually a kernel address starting
>>>> at PAGE_OFFSET.
>>>>
>>>> Fixes: f4693c2716b35 ("arm64: mm: extend linear region for 52-bit VA configurations")
>>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>>> Cc: Will Deacon <will@kernel.org>
>>>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
>>>> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
>>>> ---
>>>>  arch/arm64/include/asm/memory.h | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>>>> index 18fce223b67b..e04ac898ffe4 100644
>>>> --- a/arch/arm64/include/asm/memory.h
>>>> +++ b/arch/arm64/include/asm/memory.h
>>>> @@ -249,7 +249,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>>>>  /*
>>>>   * The linear kernel range starts at the bottom of the virtual address space.
>>>>   */
>>>> -#define __is_lm_address(addr)	(((u64)(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
>>>> +#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
>>>
>>> If we're going to make this stronger, can we please expand the comment
>>> with the intended semantic? Otherwise we're liable to break this in
>>> future.
>>
>> Based on your reply on the above matter, if you agree, I am happy to extend the
>> comment.
> 
> Works for me; how about:
> 
> /*
>  * Check whether an arbitrary address is within the linear map, which
>  * lives in the [PAGE_OFFSET, PAGE_END) interval at the bottom of the
>  * kernel's TTBR1 address range.
>  */
> 
> ... with "arbitrary" being the key word.
> 

Sounds good to me! I will post the new version after confirming the behavior of
virt_addr_valid() on the other architectures.

> Thanks,
> Mark.
> 

-- 
Regards,
Vincenzo


* Re: [PATCH v2 2/2] kasan: Add explicit preconditions to kasan_report()
  2021-01-21 13:19 ` [PATCH v2 2/2] kasan: Add explicit preconditions to kasan_report() Vincenzo Frascino
@ 2021-01-21 17:20   ` Andrey Konovalov
  2021-01-22 14:32     ` Vincenzo Frascino
  0 siblings, 1 reply; 10+ messages in thread
From: Andrey Konovalov @ 2021-01-21 17:20 UTC (permalink / raw)
  To: Vincenzo Frascino
  Cc: Linux ARM, LKML, kasan-dev, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Leon Romanovsky, Catalin Marinas, Will Deacon

On Thu, Jan 21, 2021 at 2:20 PM Vincenzo Frascino
<vincenzo.frascino@arm.com> wrote:
>
> With the introduction of KASAN_HW_TAGS, kasan_report() dereferences
> the address passed as a parameter.
>
> Add a comment to make sure that the preconditions to the function are
> explicitly clarified.
>
> Note: An invalid address (e.g. NULL) passed to the function when,
> KASAN_HW_TAGS is enabled, leads to a kernel panic.
>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: Leon Romanovsky <leonro@mellanox.com>
> Cc: Andrey Konovalov <andreyknvl@google.com>
> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> ---
>  mm/kasan/kasan.h  | 2 +-
>  mm/kasan/report.c | 7 +++++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index cc4d9e1d49b1..8c706e7652f2 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -209,7 +209,7 @@ bool check_memory_region(unsigned long addr, size_t size, bool write,
>
>  static inline bool addr_has_metadata(const void *addr)
>  {
> -       return true;
> +       return (is_vmalloc_addr(addr) || virt_addr_valid(addr));
>  }
>
>  #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index c0fb21797550..8b690091cb37 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -403,6 +403,13 @@ static void __kasan_report(unsigned long addr, size_t size, bool is_write,
>         end_report(&flags);
>  }
>
> +/**
> + * kasan_report - report kasan fault details

print a report about a bad memory access detected by KASAN

> + * @addr: valid address of the allocation where the tag fault was detected

address of the bad access

> + * @size: size of the allocation where the tag fault was detected

size of the bad access

> + * @is_write: the instruction that caused the fault was a read or write?

whether the bad access is a write or a read

(no question mark at the end)

> + * @ip: pointer to the instruction that cause the fault

instruction pointer for the accessibility check or the bad access itself

> + */

And please move this to include/kasan/kasan.h.

>  bool kasan_report(unsigned long addr, size_t size, bool is_write,
>                         unsigned long ip)
>  {
> --
> 2.30.0
>


* Re: [PATCH v2 1/2] arm64: Fix kernel address detection of __is_lm_address()
  2021-01-21 16:02         ` Vincenzo Frascino
@ 2021-01-21 17:43           ` Vincenzo Frascino
  0 siblings, 0 replies; 10+ messages in thread
From: Vincenzo Frascino @ 2021-01-21 17:43 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, kasan-dev, Andrey Konovalov,
	Leon Romanovsky, Alexander Potapenko, Catalin Marinas,
	Andrey Ryabinin, Will Deacon, Dmitry Vyukov, Ard Biesheuvel


On 1/21/21 4:02 PM, Vincenzo Frascino wrote:
>> I think it'd be worth checking, if we're going to use this in common
>> code.
>>
> Ok, I will run some tests and let you know.
> 

I checked on x86_64 and ppc64 (they both have a KASAN implementation):

I added the following:

printk("%s: %d\n", __func__, virt_addr_valid(0));

in x86_64: sound/last.c
in ppc64: arch/powerpc/kernel/setup-common.c

and in both cases the output is 0 (false), whereas on arm64 it is 1
(true). Therefore I think we should proceed with the change.

-- 
Regards,
Vincenzo


* Re: [PATCH v2 2/2] kasan: Add explicit preconditions to kasan_report()
  2021-01-21 17:20   ` Andrey Konovalov
@ 2021-01-22 14:32     ` Vincenzo Frascino
  0 siblings, 0 replies; 10+ messages in thread
From: Vincenzo Frascino @ 2021-01-22 14:32 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Linux ARM, LKML, kasan-dev, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Leon Romanovsky, Catalin Marinas, Will Deacon

Hi Andrey,

All done. Reposting shortly. Thank you!

On 1/21/21 5:20 PM, Andrey Konovalov wrote:
> And please move this to include/kasan/kasan.h.

I guess you meant include/linux/kasan.h.

-- 
Regards,
Vincenzo

