From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Andrey Ryabinin <ryabinin.a.a@gmail.com>,
	Andrey Konovalov <andreyknvl@gmail.com>,
	<linux-arm-kernel@lists.infradead.org>,
	<linux-kernel@vger.kernel.org>, <kasan-dev@googlegroups.com>,
	<linux-mm@kvack.org>, Daniel Axtens <dja@axtens.net>
Subject: Re: [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC
Date: Fri, 16 Jul 2021 13:06:32 +0800	[thread overview]
Message-ID: <5f760f6c-dcbd-b28a-2116-a2fb233fc534@huawei.com> (raw)
In-Reply-To: <089f5187-9a4d-72dc-1767-8130434bfb3a@huawei.com>


Hi Marco and Dmitry, any comments on the reply below? Thanks.

On 2021/7/6 12:07, Kefeng Wang wrote:
>
> Hi Marco and Dmitry,
>
> On 2021/7/5 23:04, Marco Elver wrote:
>> On Mon, Jul 05, 2021 at 07:14PM +0800, Kefeng Wang wrote:
>> [...]
>>> +#ifdef CONFIG_KASAN_VMALLOC
>>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>>> +						       unsigned long size)
>> This should probably not be __weak, otherwise you now have 2 __weak
>> functions.
> Indeed, I forgot to remove it; will fix.
>>> +{
>>> +	unsigned long shadow_start, shadow_end;
>>> +
>>> +	if (!is_vmalloc_or_module_addr(start))
>>> +		return;
>>> +
>>> +	shadow_start = (unsigned long)kasan_mem_to_shadow(start);
>>> +	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
>>> +	shadow_end = (unsigned long)kasan_mem_to_shadow(start + size);
>>> +	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
>>> +	kasan_map_populate(shadow_start, shadow_end,
>>> +			   early_pfn_to_nid(virt_to_pfn(start)));
>>> +}
>>> +#endif
>> This function looks quite generic -- would any of this also apply to
>> other architectures? I see that ppc and sparc at least also define
>> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK.
>
> I can't test ppc/sparc, and of those two only ppc supports KASAN_VMALLOC.
>
> I checked x86: it also supports CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK,
>
> so this issue looks to exist on x86 and ppc as well.
>
>>>   void __init kasan_init(void)
>>>   {
>>>   	kasan_init_shadow();
>>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>>> index 5310e217bd74..79d3895b0240 100644
>>> --- a/include/linux/kasan.h
>>> +++ b/include/linux/kasan.h
>>> @@ -49,6 +49,8 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
>>>   int kasan_populate_early_shadow(const void *shadow_start,
>>>   				const void *shadow_end);
>>>   
>>> +void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
>>> +
>>>   static inline void *kasan_mem_to_shadow(const void *addr)
>>>   {
>>>   	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>>> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
>>> index cc64ed6858c6..d39577d088a1 100644
>>> --- a/mm/kasan/init.c
>>> +++ b/mm/kasan/init.c
>>> @@ -279,6 +279,11 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
>>>   	return 0;
>>>   }
>>>   
>>> +void __init __weak kasan_populate_early_vm_area_shadow(void *start,
>>> +						       unsigned long size)
>>> +{
>>> +}
>> I'm just wondering if this could be a generic function, perhaps with an
>> appropriate IS_ENABLED() check of a generic Kconfig option
>> (CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK ?) to short-circuit it, if it's
>> not only an arm64 problem.
>
> kasan_map_populate() is an arm64-specific function; x86 has kasan_shallow_populate_pgds()
> and ppc has kasan_init_shadow_page_tables(), so those architectures would need to do the
> same thing arm64 does here.
>
> We can't use kasan_populate_early_shadow() here: that function maps everything in the
> early shadow to a single page of zeroes (kasan_early_shadow_page) and marks the PTEs
> with pte_wrprotect(), see zero_pte_populate(), right?
>
> Also, I tried it: changing kasan_map_populate() to kasan_populate_early_shadow() crashes
> on ARM64:
>
> Unable to handle kernel write to read-only memory at virtual address ffff700002938000
> ...
> Call trace:
>   __memset+0x16c/0x1c0
>   kasan_unpoison+0x34/0x6c
>   kasan_unpoison_vmalloc+0x2c/0x3c
>   __get_vm_area_node.constprop.0+0x13c/0x240
>   __vmalloc_node_range+0xf4/0x4f0
>   __vmalloc_node+0x80/0x9c
>   init_IRQ+0xe8/0x130
>   start_kernel+0x188/0x360
>   __primary_switched+0xc0/0xc8
>
>
>> But I haven't looked much further, so would appeal to you to either
>> confirm or reject this idea.
>>
>> Thanks,
>> -- Marco
>> .
>>


  reply	other threads:[~2021-07-16  5:07 UTC|newest]

Thread overview: 30+ messages
2021-07-05 11:14 [PATCH -next 0/3] arm64: support page mapping percpu first chunk allocator Kefeng Wang
2021-07-05 11:14 ` [PATCH -next 1/3] vmalloc: Choose a better start address in vm_area_register_early() Kefeng Wang
2021-07-05 11:14 ` [PATCH -next 2/3] arm64: Support page mapping percpu first chunk allocator Kefeng Wang
2021-07-05 11:14 ` [PATCH -next 3/3] kasan: arm64: Fix pcpu_page_first_chunk crash with KASAN_VMALLOC Kefeng Wang
2021-07-05 14:10   ` kernel test robot
2021-07-06  4:12     ` Kefeng Wang
2021-07-05 15:04   ` Marco Elver
2021-07-06  0:04     ` Daniel Axtens
2021-07-06  0:05       ` Daniel Axtens
2021-07-06  4:07     ` Kefeng Wang
2021-07-16  5:06       ` Kefeng Wang [this message]
2021-07-16  7:41         ` Marco Elver
2021-07-17  2:40           ` Kefeng Wang
2021-07-05 17:15   ` kernel test robot
