From: liuwenliang@huawei.com (Liuwenliang (Abbott Liu))
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 06/11] change memory_is_poisoned_16 for aligned error
Date: Tue, 5 Dec 2017 14:19:07 +0000 [thread overview]
Message-ID: <B8AC3E80E903784988AB3003E3E97330C006EF9B@dggemm510-mbs.china.huawei.com> (raw)
In-Reply-To: <20171019125133.GA20805@n2100.armlinux.org.uk>
On Nov 23, 2017 20:30 Russell King - ARM Linux [mailto:linux at armlinux.org.uk] wrote:
>On Thu, Oct 12, 2017 at 11:27:40AM +0000, Liuwenliang (Lamb) wrote:
>> >> - I don't understand why this is necessary. memory_is_poisoned_16()
>> >> already handles unaligned addresses?
>> >>
>> >> - If it's needed on ARM then presumably it will be needed on other
>> >> architectures, so CONFIG_ARM is insufficiently general.
>> >>
>> >> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
>> >> it would be better to generalize/fix it in some fashion rather than
>> >> creating a new variant of the function.
>>
>>
>> >Yes, I think it will be better to fix the current function rather then
>> >have 2 slightly different copies with ifdef's.
>> >Will something along these lines work for arm? 16-byte accesses are
>> >not too common, so it should not be a performance problem. And
>> >probably modern compilers can turn 2 1-byte checks into a 2-byte check
>> >where safe (x86).
>>
>> >static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>> >{
>> > u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
>> >
>> > if (shadow_addr[0] || shadow_addr[1])
>> > return true;
>> > /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>> > if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>> > return memory_is_poisoned_1(addr + 15);
>> > return false;
>> >}
>>
>> Thanks for Andrew Morton and Dmitry Vyukov's review.
>> If the parameter addr=0xc0000008, now in function:
>> static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>> {
>> --- // shadow_addr = (u16 *)(KASAN_OFFSET + 0x18000001 (= 0xc0000008 >> 3)) is not
>> --- // aligned to 2 bytes.
>> u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>>
>> /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>> if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>> return *shadow_addr || memory_is_poisoned_1(addr + 15);
>> ---- // Here an error occurs on ARM, especially before the kernel has finished
>> ---- // booting: the unaligned access raises a Data Abort exception, and the
>> ---- // exception handler is not yet installed during early start-up.
>> return *shadow_addr;
>> }
>>
>> I also think it is better to fix this problem.
>What about using get_unaligned() ?
Thanks for your review.
I think it is a good idea to use get_unaligned(). But ARMv7 selects CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
(arch/arm/Kconfig: select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU).
So on ARMv7, the code:
	u16 shadow_val = get_unaligned((u16 *)kasan_mem_to_shadow((void *)addr));
compiles to the same plain load as:
	u16 shadow_val = *(u16 *)kasan_mem_to_shadow((void *)addr);
(Note that get_unaligned() yields the pointed-to value, so the result must be stored in a u16, not a u16 *.)
On ARMv7, if SCTLR.A is 0, unaligned access is OK. Here is the description from the ARM(r) Architecture Reference
Manual, ARMv7-A and ARMv7-R edition:
A3.2.1 Unaligned data access
An ARMv7 implementation must support unaligned data accesses by some load and store instructions, as
Table A3-1 shows. Software can set the SCTLR.A bit to control whether a misaligned access by one of these
instructions causes an Alignment fault Data abort exception.
Table A3-1 Alignment requirements of load/store instructions

  Instructions                               Alignment check  SCTLR.A is 0      SCTLR.A is 1
  -----------------------------------------  ---------------  ----------------  ---------------
  LDRB, LDREXB, LDRBT, LDRSB, LDRSBT,        None             -                 -
    STRB, STREXB, STRBT, SWPB, TBB
  LDRH, LDRHT, LDRSH, LDRSHT, STRH,          Halfword         Unaligned access  Alignment fault
    STRHT, TBH
  LDREXH, STREXH                             Halfword         Alignment fault   Alignment fault
  LDR, LDRT, STR, STRT,                      Word             Unaligned access  Alignment fault
    PUSH and POP, encodings T3 and A2 only
  LDREX, STREX                               Word             Alignment fault   Alignment fault
  LDREXD, STREXD                             Doubleword       Alignment fault   Alignment fault
  All forms of LDM and STM, LDRD, RFE,       Word             Alignment fault   Alignment fault
    SRS, STRD, SWP,
    PUSH and POP, except for encodings
    T3 and A2
  LDC, LDC2, STC, STC2                       Word             Alignment fault   Alignment fault
  VLDM, VLDR, VPOP, VPUSH, VSTM, VSTR        Word             Alignment fault   Alignment fault
  VLD1, VLD2, VLD3, VLD4,                    Element size     Unaligned access  Alignment fault
    VST1, VST2, VST3, VST4,
    all with standard alignment
  VLD1, VLD2, VLD3, VLD4,                    As specified     Alignment fault   Alignment fault
    VST1, VST2, VST3, VST4,                    by <align>
    all with <align> specified
On ARMv7, the following code in arch/arm/kernel/head.S guarantees that SCTLR.A is 0:
__enable_mmu:
#if defined(CONFIG_ALIGNMENT_TRAP) && __LINUX_ARM_ARCH__ < 6
orr r0, r0, #CR_A
#else
bic r0, r0, #CR_A @ clear CR_A
#endif
#ifdef CONFIG_CPU_DCACHE_DISABLE
bic r0, r0, #CR_C
#endif
#ifdef CONFIG_CPU_BPREDICT_DISABLE
bic r0, r0, #CR_Z
#endif
#ifdef CONFIG_CPU_ICACHE_DISABLE
bic r0, r0, #CR_I
#endif
#ifdef CONFIG_ARM_LPAE
mcrr p15, 0, r4, r5, c2 @ load TTBR0
#else
mov r5, #DACR_INIT
mcr p15, 0, r5, c3, c0, 0 @ load domain access register
mcr p15, 0, r4, c2, c0, 0 @ load page table pointer
#endif
b __turn_mmu_on
ENDPROC(__enable_mmu)
/*
* Enable the MMU. This completely changes the structure of the visible
* memory space. You will not be able to trace execution through this.
* If you have an enquiry about this, *please* check the linux-arm-kernel
* mailing list archives BEFORE sending another post to the list.
*
* r0 = cp#15 control register
* r1 = machine ID
* r2 = atags or dtb pointer
* r9 = processor ID
* r13 = *virtual* address to jump to upon completion
*
* other registers depend on the function called upon completion
*/
.align 5
.pushsection .idmap.text, "ax"
ENTRY(__turn_mmu_on)
mov r0, r0
instr_sync
mcr p15, 0, r0, c1, c0, 0 @ write control reg (here SCTLR is set to r0)
mrc p15, 0, r3, c0, c0, 0 @ read id reg
instr_sync
mov r3, r3
mov r3, r13
ret r3
__turn_mmu_on_end:
ENDPROC(__turn_mmu_on)
So the following code is OK (note that get_unaligned() returns the value, so shadow_val must be a u16, not a u16 *):
static __always_inline bool memory_is_poisoned_16(unsigned long addr)
{
-	u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
+	u16 shadow_val = get_unaligned((u16 *)kasan_mem_to_shadow((void *)addr));

	/* Unaligned 16-bytes access maps into 3 shadow bytes. */
	if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
-		return *shadow_addr || memory_is_poisoned_1(addr + 15);
+		return shadow_val || memory_is_poisoned_1(addr + 15);
-	return *shadow_addr;
+	return shadow_val;
}
A very good suggestion, thanks.