* [PATCH v7 0/7] KASan for arm
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Florian Fainelli, bcm-kernel-feedback-list, glider, dvyukov,
corbet, linux, christoffer.dall, marc.zyngier, arnd, nico,
vladimir.murzin, keescook, jinb.park7, alexandre.belloni,
ard.biesheuvel, daniel.lezcano, pombredanne, liuwenliang, rob,
gregkh, akpm, mark.rutland, catalin.marinas, yamada.masahiro,
tglx, thgarnie, dhowells, geert, andre.przywara, julien.thierry,
drjones, philip, mhocko, kirill.shutemov, kasan-dev, linux-doc,
linux-kernel, kvmarm, ryabinin.a.a
Hi all,
Abbott submitted a v5 about a year ago here:
and the series was not picked up since then, so I rebased it against
v5.2-rc4 and re-tested it on a Brahma-B53 (ARMv8 running in AArch32
mode) and a Brahma-B15, both with LPAE; test-kasan results are
consistent with the ARM64 counterpart.
We were in fairly good shape last time, with a few different people
having tested it, so I am hoping we can get this included in 5.4 if
everything goes well.
Changelog:
v7 - v6
- add Linus' Tested-by for the following platforms:
Tested systems:
QEMU ARM RealView PBA8
QEMU ARM RealView PBX A9
QEMU ARM Versatile AB
Hardware Integrator CP
Hardware Versatile AB with IB2
- define CONFIG_KASAN_SHADOW_OFFSET
v6 - v5
- Resolved conflicts during rebase and updated to use
kasan_early_shadow_pte instead of kasan_zero_pte
v5 - v4
- Modify Andrey Ryabinin's email address.
v4 - v3
- Removed the type-conversion fix in kasan_cache_create because it has
already been fixed in the latest version in:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
- Changed some Reviewed-by tags into Reported-by tags to avoid being
misleading.
---Reported by: Marc Zyngier <marc.zyngier@arm.com>
Russell King - ARM Linux <linux@armlinux.org.uk>
- Disable instrumentation for arch/arm/mm/physaddr.c
v3 - v2
- Removed the patch "2 1-byte checks more safer for memory_is_poisoned_16"
because an unaligned 16-byte load/store is rare on arm, and that patch
is very likely to hurt the performance of modern CPUs.
---Acked by: Russell King - ARM Linux <linux@armlinux.org.uk>
- Fixed a link error: kasan_pmd_populate, kasan_pte_populate and
kasan_pud_populate were placed in section .meminit.text, but
kasan_alloc_block, which all three call, is in section .init.text, so
kasan_pmd_populate, kasan_pte_populate and kasan_pud_populate were
moved into section .init.text as well.
---Reported by: Florian Fainelli <f.fainelli@gmail.com>
- Fixed a compile error caused by a wrong access instruction in
arch/arm/kernel/entry-common.S.
---Reported by: kbuild test robot <lkp@intel.com>
- Disable instrumentation for arch/arm/kvm/hyp/*.
---Acked by: Marc Zyngier <marc.zyngier@arm.com>
- Update the set of supported architectures in
Documentation/dev-tools/kasan.rst.
---Acked by: Dmitry Vyukov <dvyukov@google.com>
- Version 2 was tested by:
Florian Fainelli <f.fainelli@gmail.com> (compile test)
kbuild test robot <lkp@intel.com> (compile test)
Joel Stanley <joel@jms.id.au> (on ASPEED ast2500(ARMv5))
v2 - v1
- Fixed compile errors that occurred when changing the kernel
compression mode to lzma/xz/lzo/lz4.
---Reported by: Florian Fainelli <f.fainelli@gmail.com>,
Russell King - ARM Linux <linux@armlinux.org.uk>
- Fixed a compile error, reported by kbuild, caused by older ARM
instruction sets (ARMv4T) not supporting movw/movt.
- Changed the pte flag from _L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN to
pgprot_val(PAGE_KERNEL).
---Reported by: Russell King - ARM Linux <linux@armlinux.org.uk>
- Moved the "Enable KASan" patch to be the last one.
---Reported by: Florian Fainelli <f.fainelli@gmail.com>,
Russell King - ARM Linux <linux@armlinux.org.uk>
- Moved the definitions of cp15 registers from
arch/arm/include/asm/kvm_hyp.h to arch/arm/include/asm/cp15.h.
---Asked by: Mark Rutland <mark.rutland@arm.com>
- Merged the following commits into the commit
"Define the virtual space of KASan's shadow region":
1) Define the virtual space of KASan's shadow region;
2) Avoid cleaning the KASan shadow area's mapping table;
3) Add KASan layout;
- Merged the following commits into the commit
"Initialize the mapping of KASan shadow memory":
1) Initialize the mapping of KASan shadow memory;
2) Add support for ARM LPAE;
3) No need to map the shadow of KASan's shadow memory;
---Reported by: Russell King - ARM Linux <linux@armlinux.org.uk>
4) Change the mapping of kasan_zero_page to read-only.
- Version 1 was tested by Florian Fainelli <f.fainelli@gmail.com>
on a Cortex-A5 (no LPAE).
Hi all,
These patches add arch specific code for kernel address sanitizer
(see Documentation/kasan.txt).
1/8 of the kernel address space is reserved for shadow memory. There
was no hole big enough for this, so the virtual addresses for the
shadow were stolen from user space.
At the early boot stage, the whole shadow region is populated with just
one physical page (kasan_zero_page). Later, this page is reused as
read-only zero shadow for memory that KASan currently does not track
(vmalloc).
After mapping the physical memory, pages for shadow memory are
allocated and mapped.
KASan's stack instrumentation significantly increases stack
consumption, so CONFIG_KASAN doubles THREAD_SIZE.
Functions like memset/memmove/memcpy perform a lot of memory accesses.
If a bad pointer is passed to one of these functions, it is important
to catch it. The compiler's instrumentation cannot do this, since
these functions are written in assembly.
KASan replaces these memory functions with manually instrumented
variants. The original functions are declared as weak symbols, so that
the strong definitions in mm/kasan/kasan.c replace them. The original
functions also have aliases with a '__' prefix, so the
non-instrumented variant can still be called when needed.
Some files are built without KASan instrumentation (e.g. mm/slub.c).
For such files, the original mem* functions are replaced (via #define)
with the prefixed variants to disable memory access checks.
On the ARM LPAE architecture, the page tables for KASan shadow memory
(if PAGE_OFFSET is 0xc0000000, the KASan shadow memory occupies the
virtual range 0xb6e00000~0xbf000000) cannot be filled in the
do_translation_fault function, because KASan instrumentation may cause
do_translation_fault itself to access KASan shadow memory, and such an
access from within do_translation_fault could recurse endlessly. The
KASan shadow page-table entries therefore need to be copied in the
pgd_alloc function.
Most of the code comes from:
https://github.com/aryabinin/linux/commit/0b54f17e70ff50a902c4af05bb92716eb95acefe
These patches were tested on vexpress-ca15 and vexpress-ca9.
Abbott Liu (2):
ARM: Add TTBR operator for kasan_init
ARM: Define the virtual space of KASan's shadow region
Andrey Ryabinin (4):
ARM: Disable instrumentation for some code
ARM: Replace memory function for kasan
ARM: Initialize the mapping of KASan shadow memory
ARM: Enable KASan for ARM
Florian Fainelli (1):
ARM: Moved CP15 definitions from kvm_hyp.h to cp15.h
Documentation/dev-tools/kasan.rst | 4 +-
arch/arm/Kconfig | 9 +
arch/arm/boot/compressed/Makefile | 2 +
arch/arm/include/asm/cp15.h | 107 +++++++++
arch/arm/include/asm/kasan.h | 35 +++
arch/arm/include/asm/kasan_def.h | 63 ++++++
arch/arm/include/asm/kvm_hyp.h | 54 -----
arch/arm/include/asm/memory.h | 5 +
arch/arm/include/asm/pgalloc.h | 9 +-
arch/arm/include/asm/string.h | 17 ++
arch/arm/include/asm/thread_info.h | 4 +
arch/arm/kernel/entry-armv.S | 5 +-
arch/arm/kernel/entry-common.S | 9 +-
arch/arm/kernel/head-common.S | 7 +-
arch/arm/kernel/setup.c | 2 +
arch/arm/kernel/unwind.c | 6 +-
arch/arm/kvm/hyp/cp15-sr.c | 12 +-
arch/arm/kvm/hyp/switch.c | 6 +-
arch/arm/lib/memcpy.S | 3 +
arch/arm/lib/memmove.S | 5 +-
arch/arm/lib/memset.S | 3 +
arch/arm/mm/Makefile | 4 +
arch/arm/mm/kasan_init.c | 302 ++++++++++++++++++++++++++
arch/arm/mm/mmu.c | 7 +-
arch/arm/mm/pgd.c | 14 ++
arch/arm/vdso/Makefile | 2 +
drivers/firmware/efi/libstub/Makefile | 3 +-
27 files changed, 621 insertions(+), 78 deletions(-)
create mode 100644 arch/arm/include/asm/kasan.h
create mode 100644 arch/arm/include/asm/kasan_def.h
create mode 100644 arch/arm/mm/kasan_init.c
--
2.17.1
* [PATCH v7 1/7] ARM: Moved CP15 definitions from kvm_hyp.h to cp15.h
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Florian Fainelli, bcm-kernel-feedback-list, glider, dvyukov,
corbet, linux, christoffer.dall, marc.zyngier, arnd, nico,
vladimir.murzin, keescook, jinb.park7, alexandre.belloni,
ard.biesheuvel, daniel.lezcano, pombredanne, liuwenliang, rob,
gregkh, akpm, mark.rutland, catalin.marinas, yamada.masahiro,
tglx, thgarnie, dhowells, geert, andre.przywara, julien.thierry,
drjones, philip, mhocko, kirill.shutemov, kasan-dev, linux-doc,
linux-kernel, kvmarm, ryabinin.a.a
We are going to add TTBR accessor functions that are 32-bit/64-bit
appropriate, so move all CP15 register definitions into cp15.h, where
they belong.
Suggested-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/cp15.h | 57 ++++++++++++++++++++++++++++++++++
arch/arm/include/asm/kvm_hyp.h | 54 --------------------------------
2 files changed, 57 insertions(+), 54 deletions(-)
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index d2453e2d3f1f..89b6663f2863 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -70,6 +70,63 @@
#define CNTVCT __ACCESS_CP15_64(1, c14)
+#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
+#define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
+#define TTBR0_64 __ACCESS_CP15_64(0, c2)
+#define TTBR1_64 __ACCESS_CP15_64(1, c2)
+#define PAR_64 __ACCESS_CP15_64(0, c7)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define CNTP_CVAL __ACCESS_CP15_64(2, c14)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define BPIALLIS __ACCESS_CP15(c7, 0, c1, 6)
+#define ICIMVAU __ACCESS_CP15(c7, 0, c5, 1)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTP_CTL __ACCESS_CP15(c14, 0, c2, 1)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
static inline unsigned long get_cr(void)
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 40e9034db601..f6635bd63ff0 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -25,60 +25,6 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
-#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
-#define CNTP_CVAL __ACCESS_CP15_64(2, c14)
-#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
-#define CNTVOFF __ACCESS_CP15_64(4, c14)
-
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
-#define HCR __ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
-#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define BPIALLIS __ACCESS_CP15(c7, 0, c1, 6)
-#define ICIMVAU __ACCESS_CP15(c7, 0, c5, 1)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTP_CTL __ACCESS_CP15(c14, 0, c2, 1)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
/* AArch64 compatibility macros, only for the timer so far */
--
2.17.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v7 1/7] ARM: Moved CP15 definitions from kvm_hyp.h to cp15.h
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: alexandre.belloni, mhocko, catalin.marinas, linux-kernel,
dhowells, yamada.masahiro, ryabinin.a.a, glider, kvmarm,
Florian Fainelli, corbet, liuwenliang, daniel.lezcano, linux,
kasan-dev, bcm-kernel-feedback-list, geert, keescook, arnd,
marc.zyngier, andre.przywara, philip, jinb.park7, tglx, dvyukov,
nico, gregkh, ard.biesheuvel, linux-doc, rob, pombredanne, akpm,
thgarnie, kirill.shutemov
We are going to add TTBR accessor functions that are 32-bit/64-bit
appropriate, so move all CP15 register definitions into cp15.h where
they belong.
Suggested-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/cp15.h | 57 ++++++++++++++++++++++++++++++++++
arch/arm/include/asm/kvm_hyp.h | 54 --------------------------------
2 files changed, 57 insertions(+), 54 deletions(-)
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index d2453e2d3f1f..89b6663f2863 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -70,6 +70,63 @@
#define CNTVCT __ACCESS_CP15_64(1, c14)
+#define TTBR0_32 __ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR1_32 __ACCESS_CP15(c2, 0, c0, 1)
+#define PAR_32 __ACCESS_CP15(c7, 0, c4, 0)
+#define TTBR0_64 __ACCESS_CP15_64(0, c2)
+#define TTBR1_64 __ACCESS_CP15_64(1, c2)
+#define PAR_64 __ACCESS_CP15_64(0, c7)
+#define VTTBR __ACCESS_CP15_64(6, c2)
+#define CNTP_CVAL __ACCESS_CP15_64(2, c14)
+#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
+#define CNTVOFF __ACCESS_CP15_64(4, c14)
+
+#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
+#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
+#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
+#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
+#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
+#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
+#define HCR __ACCESS_CP15(c1, 4, c1, 0)
+#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
+#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
+#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
+#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
+#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
+#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
+#define DACR __ACCESS_CP15(c3, 0, c0, 0)
+#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
+#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
+#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
+#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
+#define HSR __ACCESS_CP15(c5, 4, c2, 0)
+#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
+#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
+#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
+#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
+#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
+#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
+#define BPIALLIS __ACCESS_CP15(c7, 0, c1, 6)
+#define ICIMVAU __ACCESS_CP15(c7, 0, c5, 1)
+#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
+#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
+#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
+#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
+#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
+#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
+#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
+#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
+#define CID __ACCESS_CP15(c13, 0, c0, 1)
+#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
+#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
+#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
+#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
+#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
+#define CNTP_CTL __ACCESS_CP15(c14, 0, c2, 1)
+#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
+#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */
static inline unsigned long get_cr(void)
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 40e9034db601..f6635bd63ff0 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -25,60 +25,6 @@
__val; \
})
-#define TTBR0 __ACCESS_CP15_64(0, c2)
-#define TTBR1 __ACCESS_CP15_64(1, c2)
-#define VTTBR __ACCESS_CP15_64(6, c2)
-#define PAR __ACCESS_CP15_64(0, c7)
-#define CNTP_CVAL __ACCESS_CP15_64(2, c14)
-#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
-#define CNTVOFF __ACCESS_CP15_64(4, c14)
-
-#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
-#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
-#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
-#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
-#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
-#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
-#define HCR __ACCESS_CP15(c1, 4, c1, 0)
-#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
-#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
-#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
-#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
-#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
-#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
-#define DACR __ACCESS_CP15(c3, 0, c0, 0)
-#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
-#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
-#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
-#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
-#define HSR __ACCESS_CP15(c5, 4, c2, 0)
-#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
-#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
-#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
-#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
-#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
-#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
-#define BPIALLIS __ACCESS_CP15(c7, 0, c1, 6)
-#define ICIMVAU __ACCESS_CP15(c7, 0, c5, 1)
-#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
-#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
-#define TLBIALL __ACCESS_CP15(c8, 0, c7, 0)
-#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
-#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
-#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
-#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
-#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
-#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
-#define CID __ACCESS_CP15(c13, 0, c0, 1)
-#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
-#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
-#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
-#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
-#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
-#define CNTP_CTL __ACCESS_CP15(c14, 0, c2, 1)
-#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
-#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
-
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
/* AArch64 compatibility macros, only for the timer so far */
--
2.17.1
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
* [PATCH v7 2/7] ARM: Add TTBR operator for kasan_init
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Abbott Liu, Andrey Ryabinin, Florian Fainelli,
bcm-kernel-feedback-list, glider, dvyukov, corbet, linux,
christoffer.dall, marc.zyngier, arnd, nico, vladimir.murzin,
keescook, jinb.park7, alexandre.belloni, ard.biesheuvel,
daniel.lezcano, pombredanne, rob, gregkh, akpm, mark.rutland,
catalin.marinas, yamada.masahiro, tglx, thgarnie, dhowells,
geert, andre.przywara, julien.thierry, drjones, philip, mhocko,
kirill.shutemov, kasan-dev, linux-doc, linux-kernel, kvmarm,
ryabinin.a.a
From: Abbott Liu <liuwenliang@huawei.com>
The purpose of this patch is to provide set_ttbr0()/get_ttbr0() to the
kasan_init() function. This makes use of the CP15 definitions added in
the previous patch.
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/cp15.h | 50 +++++++++++++++++++++++++++++++++++++
arch/arm/kvm/hyp/cp15-sr.c | 12 ++++-----
arch/arm/kvm/hyp/switch.c | 6 ++---
3 files changed, 59 insertions(+), 9 deletions(-)
diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
index 89b6663f2863..0bd8287b39fa 100644
--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -42,6 +42,8 @@
#ifndef __ASSEMBLY__
+#include <linux/stringify.h>
+
#if __LINUX_ARM_ARCH__ >= 4
#define vectors_high() (get_cr() & CR_V)
#else
@@ -129,6 +131,54 @@
extern unsigned long cr_alignment; /* defined in entry-armv.S */
+static inline void set_par(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, PAR_64);
+ else
+ write_sysreg(val, PAR_32);
+}
+
+static inline u64 get_par(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(PAR_64);
+ else
+ return read_sysreg(PAR_32);
+}
+
+static inline void set_ttbr0(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR0_64);
+ else
+ write_sysreg(val, TTBR0_32);
+}
+
+static inline u64 get_ttbr0(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR0_64);
+ else
+ return read_sysreg(TTBR0_32);
+}
+
+static inline void set_ttbr1(u64 val)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ write_sysreg(val, TTBR1_64);
+ else
+ write_sysreg(val, TTBR1_32);
+}
+
+static inline u64 get_ttbr1(void)
+{
+ if (IS_ENABLED(CONFIG_ARM_LPAE))
+ return read_sysreg(TTBR1_64);
+ else
+ return read_sysreg(TTBR1_32);
+}
+
static inline unsigned long get_cr(void)
{
unsigned long val;
diff --git a/arch/arm/kvm/hyp/cp15-sr.c b/arch/arm/kvm/hyp/cp15-sr.c
index e6923306f698..b2b9bb0a08b8 100644
--- a/arch/arm/kvm/hyp/cp15-sr.c
+++ b/arch/arm/kvm/hyp/cp15-sr.c
@@ -19,8 +19,8 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c0_CSSELR] = read_sysreg(CSSELR);
ctxt->cp15[c1_SCTLR] = read_sysreg(SCTLR);
ctxt->cp15[c1_CPACR] = read_sysreg(CPACR);
- *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0);
- *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1);
+ *cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0_64);
+ *cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1_64);
ctxt->cp15[c2_TTBCR] = read_sysreg(TTBCR);
ctxt->cp15[c3_DACR] = read_sysreg(DACR);
ctxt->cp15[c5_DFSR] = read_sysreg(DFSR);
@@ -29,7 +29,7 @@ void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
ctxt->cp15[c5_AIFSR] = read_sysreg(AIFSR);
ctxt->cp15[c6_DFAR] = read_sysreg(DFAR);
ctxt->cp15[c6_IFAR] = read_sysreg(IFAR);
- *cp15_64(ctxt, c7_PAR) = read_sysreg(PAR);
+ *cp15_64(ctxt, c7_PAR) = read_sysreg(PAR_64);
ctxt->cp15[c10_PRRR] = read_sysreg(PRRR);
ctxt->cp15[c10_NMRR] = read_sysreg(NMRR);
ctxt->cp15[c10_AMAIR0] = read_sysreg(AMAIR0);
@@ -48,8 +48,8 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c0_CSSELR], CSSELR);
write_sysreg(ctxt->cp15[c1_SCTLR], SCTLR);
write_sysreg(ctxt->cp15[c1_CPACR], CPACR);
- write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0);
- write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0_64);
+ write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1_64);
write_sysreg(ctxt->cp15[c2_TTBCR], TTBCR);
write_sysreg(ctxt->cp15[c3_DACR], DACR);
write_sysreg(ctxt->cp15[c5_DFSR], DFSR);
@@ -58,7 +58,7 @@ void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
write_sysreg(ctxt->cp15[c5_AIFSR], AIFSR);
write_sysreg(ctxt->cp15[c6_DFAR], DFAR);
write_sysreg(ctxt->cp15[c6_IFAR], IFAR);
- write_sysreg(*cp15_64(ctxt, c7_PAR), PAR);
+ write_sysreg(*cp15_64(ctxt, c7_PAR), PAR_64);
write_sysreg(ctxt->cp15[c10_PRRR], PRRR);
write_sysreg(ctxt->cp15[c10_NMRR], NMRR);
write_sysreg(ctxt->cp15[c10_AMAIR0], AMAIR0);
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index 1efeef3fd0ee..581277ef44d3 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -123,12 +123,12 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
if (!(hsr & HSR_DABT_S1PTW) && (hsr & HSR_FSC_TYPE) == FSC_PERM) {
u64 par, tmp;
- par = read_sysreg(PAR);
+ par = read_sysreg(PAR_64);
write_sysreg(far, ATS1CPR);
isb();
- tmp = read_sysreg(PAR);
- write_sysreg(par, PAR);
+ tmp = read_sysreg(PAR_64);
+ write_sysreg(par, PAR_64);
if (unlikely(tmp & 1))
return false; /* Translation failed, back to guest */
--
2.17.1
* [PATCH v7 3/7] ARM: Disable instrumentation for some code
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Andrey Ryabinin, Abbott Liu, Florian Fainelli,
bcm-kernel-feedback-list, glider, dvyukov, corbet, linux,
christoffer.dall, marc.zyngier, arnd, nico, vladimir.murzin,
keescook, jinb.park7, alexandre.belloni, ard.biesheuvel,
daniel.lezcano, pombredanne, rob, gregkh, akpm, mark.rutland,
catalin.marinas, yamada.masahiro, tglx, thgarnie, dhowells,
geert, andre.przywara, julien.thierry, drjones, philip, mhocko,
kirill.shutemov, kasan-dev, linux-doc, linux-kernel, kvmarm,
ryabinin.a.a
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Disable instrumentation for arch/arm/boot/compressed/* and
arch/arm/vdso/* because that code is not linked with the kernel image.
Disable instrumentation for arch/arm/kvm/hyp/*. See commit a6cdf1c08cbf
("kvm: arm64: Disable compiler instrumentation for hypervisor code") for
more details.
Disable instrumentation for arch/arm/mm/physaddr.c. See commit
ec6d06efb0ba ("arm64: Add support for CONFIG_DEBUG_VIRTUAL") for more
details.
Disable the KASan check in unwind_pop_register() because it is
legitimate for the unwinder to read another task's stack memory, so
KASan failures on those accesses do not matter.
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/boot/compressed/Makefile | 1 +
arch/arm/kernel/unwind.c | 6 +++++-
arch/arm/mm/Makefile | 1 +
arch/arm/vdso/Makefile | 2 ++
4 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index a1e883c5e5c4..83991a0447fa 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -24,6 +24,7 @@ OBJS += hyp-stub.o
endif
GCOV_PROFILE := n
+KASAN_SANITIZE := n
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index 4574e6aea0a5..f73601416f90 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -236,7 +236,11 @@ static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
if (*vsp >= (unsigned long *)ctrl->sp_high)
return -URC_FAILURE;
- ctrl->vrs[reg] = *(*vsp)++;
+ /* Use READ_ONCE_NOCHECK here to avoid this memory access
+ * from being tracked by KASAN.
+ */
+ ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp));
+ (*vsp)++;
return URC_OK;
}
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 7cb1699fbfc4..432302911d6e 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -16,6 +16,7 @@ endif
obj-$(CONFIG_ARM_PTDUMP_CORE) += dump.o
obj-$(CONFIG_ARM_PTDUMP_DEBUGFS) += ptdump_debugfs.o
obj-$(CONFIG_MODULES) += proc-syms.o
+KASAN_SANITIZE_physaddr.o := n
obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o
diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
index 0fda344beb0b..1f76a5ff6e49 100644
--- a/arch/arm/vdso/Makefile
+++ b/arch/arm/vdso/Makefile
@@ -42,6 +42,8 @@ GCOV_PROFILE := n
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
+KASAN_SANITIZE := n
+
# Force dependency
$(obj)/vdso.o : $(obj)/vdso.so
--
2.17.1
* [PATCH v7 3/7] ARM: Disable instrumentation for some code
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: alexandre.belloni, mhocko, catalin.marinas, linux-kernel,
dhowells, yamada.masahiro, ryabinin.a.a, glider, kvmarm,
Florian Fainelli, corbet, Abbott Liu, daniel.lezcano, linux,
kasan-dev, bcm-kernel-feedback-list, Andrey Ryabinin, keescook,
arnd, marc.zyngier, andre.przywara, philip, jinb.park7, tglx,
dvyukov, nico, gregkh, ard.biesheuvel, linux-doc, geert, rob,
pombredanne, akpm, thgarnie, kirill.shutemov
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Disable instrumentation for arch/arm/boot/compressed/* and
arch/arm/vdso/* because that code is not linked with the kernel image.
Disable instrumentation for arch/arm/kvm/hyp/*. See commit a6cdf1c08cbf
("kvm: arm64: Disable compiler instrumentation for hypervisor code") for
more details.
Disable instrumentation for arch/arm/mm/physaddr.c. See commit
ec6d06efb0ba ("arm64: Add support for CONFIG_DEBUG_VIRTUAL") for more
details.
Disable the KASan check in unwind_pop_register() because it is
expected, not a bug, for the unwinder to read a task's stack memory
directly; a failed KASan check there would be a false positive.
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/boot/compressed/Makefile | 1 +
arch/arm/kernel/unwind.c | 6 +++++-
arch/arm/mm/Makefile | 1 +
arch/arm/vdso/Makefile | 2 ++
4 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index a1e883c5e5c4..83991a0447fa 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -24,6 +24,7 @@ OBJS += hyp-stub.o
endif
GCOV_PROFILE := n
+KASAN_SANITIZE := n
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index 4574e6aea0a5..f73601416f90 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -236,7 +236,11 @@ static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
if (*vsp >= (unsigned long *)ctrl->sp_high)
return -URC_FAILURE;
- ctrl->vrs[reg] = *(*vsp)++;
+ /* Use READ_ONCE_NOCHECK here to avoid this memory access
+ * from being tracked by KASAN.
+ */
+ ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp));
+ (*vsp)++;
return URC_OK;
}
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 7cb1699fbfc4..432302911d6e 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -16,6 +16,7 @@ endif
obj-$(CONFIG_ARM_PTDUMP_CORE) += dump.o
obj-$(CONFIG_ARM_PTDUMP_DEBUGFS) += ptdump_debugfs.o
obj-$(CONFIG_MODULES) += proc-syms.o
+KASAN_SANITIZE_physaddr.o := n
obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o
diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
index 0fda344beb0b..1f76a5ff6e49 100644
--- a/arch/arm/vdso/Makefile
+++ b/arch/arm/vdso/Makefile
@@ -42,6 +42,8 @@ GCOV_PROFILE := n
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
+KASAN_SANITIZE := n
+
# Force dependency
$(obj)/vdso.o : $(obj)/vdso.so
--
2.17.1
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
* [PATCH v7 3/7] ARM: Disable instrumentation for some code
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: mark.rutland, alexandre.belloni, mhocko, julien.thierry,
catalin.marinas, linux-kernel, dhowells, yamada.masahiro,
ryabinin.a.a, glider, kvmarm, Florian Fainelli, corbet,
Abbott Liu, daniel.lezcano, linux, kasan-dev,
bcm-kernel-feedback-list, Andrey Ryabinin, drjones,
vladimir.murzin, keescook, arnd, marc.zyngier, andre.przywara,
philip, jinb.park7, tglx, dvyukov, nico, gregkh, ard.biesheuvel,
linux-doc, christoffer.dall, geert, rob, pombredanne, akpm,
thgarnie, kirill.shutemov
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Disable instrumentation for arch/arm/boot/compressed/* and
arch/arm/vdso/* because that code is not linked with the kernel image.
Disable instrumentation for arch/arm/kvm/hyp/*. See commit a6cdf1c08cbf
("kvm: arm64: Disable compiler instrumentation for hypervisor code") for
more details.
Disable instrumentation for arch/arm/mm/physaddr.c. See commit
ec6d06efb0ba ("arm64: Add support for CONFIG_DEBUG_VIRTUAL") for more
details.
Disable the KASan check in unwind_pop_register() because it is
expected, not a bug, for the unwinder to read a task's stack memory
directly; a failed KASan check there would be a false positive.
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/boot/compressed/Makefile | 1 +
arch/arm/kernel/unwind.c | 6 +++++-
arch/arm/mm/Makefile | 1 +
arch/arm/vdso/Makefile | 2 ++
4 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index a1e883c5e5c4..83991a0447fa 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -24,6 +24,7 @@ OBJS += hyp-stub.o
endif
GCOV_PROFILE := n
+KASAN_SANITIZE := n
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
index 4574e6aea0a5..f73601416f90 100644
--- a/arch/arm/kernel/unwind.c
+++ b/arch/arm/kernel/unwind.c
@@ -236,7 +236,11 @@ static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
if (*vsp >= (unsigned long *)ctrl->sp_high)
return -URC_FAILURE;
- ctrl->vrs[reg] = *(*vsp)++;
+ /* Use READ_ONCE_NOCHECK here to avoid this memory access
+ * from being tracked by KASAN.
+ */
+ ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp));
+ (*vsp)++;
return URC_OK;
}
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 7cb1699fbfc4..432302911d6e 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -16,6 +16,7 @@ endif
obj-$(CONFIG_ARM_PTDUMP_CORE) += dump.o
obj-$(CONFIG_ARM_PTDUMP_DEBUGFS) += ptdump_debugfs.o
obj-$(CONFIG_MODULES) += proc-syms.o
+KASAN_SANITIZE_physaddr.o := n
obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o
diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
index 0fda344beb0b..1f76a5ff6e49 100644
--- a/arch/arm/vdso/Makefile
+++ b/arch/arm/vdso/Makefile
@@ -42,6 +42,8 @@ GCOV_PROFILE := n
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
+KASAN_SANITIZE := n
+
# Force dependency
$(obj)/vdso.o : $(obj)/vdso.so
--
2.17.1
* [PATCH v7 4/7] ARM: Replace memory function for kasan
2020-01-17 22:48 ` Florian Fainelli
@ 2020-01-17 22:48 ` Florian Fainelli
-1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Andrey Ryabinin, Abbott Liu, Florian Fainelli,
bcm-kernel-feedback-list, glider, dvyukov, corbet, linux,
christoffer.dall, marc.zyngier, arnd, nico, vladimir.murzin,
keescook, jinb.park7, alexandre.belloni, ard.biesheuvel,
daniel.lezcano, pombredanne, rob, gregkh, akpm, mark.rutland,
catalin.marinas, yamada.masahiro, tglx, thgarnie, dhowells,
geert, andre.przywara, julien.thierry, drjones, philip, mhocko,
kirill.shutemov, kasan-dev, linux-doc, linux-kernel, kvmarm,
ryabinin.a.a
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Functions like memset/memmove/memcpy do a lot of memory accesses. If a
bad pointer is passed to one of these functions, it is important to
catch this. Compiler instrumentation cannot do so, since these
functions are written in assembly.
KASan replaces these memory functions with manually instrumented
variants. The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them. The
original functions also have aliases with a '__' prefix, so the
non-instrumented variants can be called when needed.
We must use __memcpy/__memset to replace memcpy/memset when we copy
.data to RAM and when we clear .bss, because kasan_early_init cannot be
called before the initialization of .data and .bss.
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/string.h | 17 +++++++++++++++++
arch/arm/kernel/head-common.S | 4 ++--
arch/arm/lib/memcpy.S | 3 +++
arch/arm/lib/memmove.S | 5 ++++-
arch/arm/lib/memset.S | 3 +++
5 files changed, 29 insertions(+), 3 deletions(-)
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index 111a1d8a41dd..1f9016bbf153 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -15,15 +15,18 @@ extern char * strchr(const char * s, int c);
#define __HAVE_ARCH_MEMCPY
extern void * memcpy(void *, const void *, __kernel_size_t);
+extern void *__memcpy(void *dest, const void *src, __kernel_size_t n);
#define __HAVE_ARCH_MEMMOVE
extern void * memmove(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *dest, const void *src, __kernel_size_t n);
#define __HAVE_ARCH_MEMCHR
extern void * memchr(const void *, int, __kernel_size_t);
#define __HAVE_ARCH_MEMSET
extern void * memset(void *, int, __kernel_size_t);
+extern void *__memset(void *s, int c, __kernel_size_t n);
#define __HAVE_ARCH_MEMSET32
extern void *__memset32(uint32_t *, uint32_t v, __kernel_size_t);
@@ -39,4 +42,18 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
return __memset64(p, v, n * 8, v >> 32);
}
+
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
#endif
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 4a3982812a40..6840c7c60a85 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -95,7 +95,7 @@ __mmap_switched:
THUMB( ldmia r4!, {r0, r1, r2, r3} )
THUMB( mov sp, r3 )
sub r2, r2, r1
- bl memcpy @ copy .data to RAM
+ bl __memcpy @ copy .data to RAM
#endif
ARM( ldmia r4!, {r0, r1, sp} )
@@ -103,7 +103,7 @@ __mmap_switched:
THUMB( mov sp, r3 )
sub r2, r1, r0
mov r1, #0
- bl memset @ clear .bss
+ bl __memset @ clear .bss
ldmia r4, {r0, r1, r2, r3}
str r9, [r0] @ Save processor ID
diff --git a/arch/arm/lib/memcpy.S b/arch/arm/lib/memcpy.S
index 09a333153dc6..ad4625d16e11 100644
--- a/arch/arm/lib/memcpy.S
+++ b/arch/arm/lib/memcpy.S
@@ -58,6 +58,8 @@
/* Prototype: void *memcpy(void *dest, const void *src, size_t n); */
+.weak memcpy
+ENTRY(__memcpy)
ENTRY(mmiocpy)
ENTRY(memcpy)
@@ -65,3 +67,4 @@ ENTRY(memcpy)
ENDPROC(memcpy)
ENDPROC(mmiocpy)
+ENDPROC(__memcpy)
diff --git a/arch/arm/lib/memmove.S b/arch/arm/lib/memmove.S
index b50e5770fb44..fd123ea5a5a4 100644
--- a/arch/arm/lib/memmove.S
+++ b/arch/arm/lib/memmove.S
@@ -24,12 +24,14 @@
* occurring in the opposite direction.
*/
+.weak memmove
+ENTRY(__memmove)
ENTRY(memmove)
UNWIND( .fnstart )
subs ip, r0, r1
cmphi r2, ip
- bls memcpy
+ bls __memcpy
stmfd sp!, {r0, r4, lr}
UNWIND( .fnend )
@@ -222,3 +224,4 @@ ENTRY(memmove)
18: backward_copy_shift push=24 pull=8
ENDPROC(memmove)
+ENDPROC(__memmove)
diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
index 6ca4535c47fb..0e7ff0423f50 100644
--- a/arch/arm/lib/memset.S
+++ b/arch/arm/lib/memset.S
@@ -13,6 +13,8 @@
.text
.align 5
+.weak memset
+ENTRY(__memset)
ENTRY(mmioset)
ENTRY(memset)
UNWIND( .fnstart )
@@ -132,6 +134,7 @@ UNWIND( .fnstart )
UNWIND( .fnend )
ENDPROC(memset)
ENDPROC(mmioset)
+ENDPROC(__memset)
ENTRY(__memset32)
UNWIND( .fnstart )
--
2.17.1
* [PATCH v7 4/7] ARM: Replace memory function for kasan
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: alexandre.belloni, mhocko, catalin.marinas, linux-kernel,
dhowells, yamada.masahiro, ryabinin.a.a, glider, kvmarm,
Florian Fainelli, corbet, Abbott Liu, daniel.lezcano, linux,
kasan-dev, bcm-kernel-feedback-list, Andrey Ryabinin, keescook,
arnd, marc.zyngier, andre.przywara, philip, jinb.park7, tglx,
dvyukov, nico, gregkh, ard.biesheuvel, linux-doc, geert, rob,
pombredanne, akpm, thgarnie, kirill.shutemov
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Functions like memset/memmove/memcpy do a lot of memory accesses. If a
bad pointer is passed to one of these functions, it is important to
catch this. Compiler instrumentation cannot do so, since these
functions are written in assembly.
KASan replaces these memory functions with manually instrumented
variants. The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them. The
original functions also have aliases with a '__' prefix, so the
non-instrumented variants can be called when needed.
We must use __memcpy/__memset to replace memcpy/memset when we copy
.data to RAM and when we clear .bss, because kasan_early_init cannot be
called before the initialization of .data and .bss.
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/string.h | 17 +++++++++++++++++
arch/arm/kernel/head-common.S | 4 ++--
arch/arm/lib/memcpy.S | 3 +++
arch/arm/lib/memmove.S | 5 ++++-
arch/arm/lib/memset.S | 3 +++
5 files changed, 29 insertions(+), 3 deletions(-)
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index 111a1d8a41dd..1f9016bbf153 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -15,15 +15,18 @@ extern char * strchr(const char * s, int c);
#define __HAVE_ARCH_MEMCPY
extern void * memcpy(void *, const void *, __kernel_size_t);
+extern void *__memcpy(void *dest, const void *src, __kernel_size_t n);
#define __HAVE_ARCH_MEMMOVE
extern void * memmove(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *dest, const void *src, __kernel_size_t n);
#define __HAVE_ARCH_MEMCHR
extern void * memchr(const void *, int, __kernel_size_t);
#define __HAVE_ARCH_MEMSET
extern void * memset(void *, int, __kernel_size_t);
+extern void *__memset(void *s, int c, __kernel_size_t n);
#define __HAVE_ARCH_MEMSET32
extern void *__memset32(uint32_t *, uint32_t v, __kernel_size_t);
@@ -39,4 +42,18 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
return __memset64(p, v, n * 8, v >> 32);
}
+
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
#endif
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 4a3982812a40..6840c7c60a85 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -95,7 +95,7 @@ __mmap_switched:
THUMB( ldmia r4!, {r0, r1, r2, r3} )
THUMB( mov sp, r3 )
sub r2, r2, r1
- bl memcpy @ copy .data to RAM
+ bl __memcpy @ copy .data to RAM
#endif
ARM( ldmia r4!, {r0, r1, sp} )
@@ -103,7 +103,7 @@ __mmap_switched:
THUMB( mov sp, r3 )
sub r2, r1, r0
mov r1, #0
- bl memset @ clear .bss
+ bl __memset @ clear .bss
ldmia r4, {r0, r1, r2, r3}
str r9, [r0] @ Save processor ID
diff --git a/arch/arm/lib/memcpy.S b/arch/arm/lib/memcpy.S
index 09a333153dc6..ad4625d16e11 100644
--- a/arch/arm/lib/memcpy.S
+++ b/arch/arm/lib/memcpy.S
@@ -58,6 +58,8 @@
/* Prototype: void *memcpy(void *dest, const void *src, size_t n); */
+.weak memcpy
+ENTRY(__memcpy)
ENTRY(mmiocpy)
ENTRY(memcpy)
@@ -65,3 +67,4 @@ ENTRY(memcpy)
ENDPROC(memcpy)
ENDPROC(mmiocpy)
+ENDPROC(__memcpy)
diff --git a/arch/arm/lib/memmove.S b/arch/arm/lib/memmove.S
index b50e5770fb44..fd123ea5a5a4 100644
--- a/arch/arm/lib/memmove.S
+++ b/arch/arm/lib/memmove.S
@@ -24,12 +24,14 @@
* occurring in the opposite direction.
*/
+.weak memmove
+ENTRY(__memmove)
ENTRY(memmove)
UNWIND( .fnstart )
subs ip, r0, r1
cmphi r2, ip
- bls memcpy
+ bls __memcpy
stmfd sp!, {r0, r4, lr}
UNWIND( .fnend )
@@ -222,3 +224,4 @@ ENTRY(memmove)
18: backward_copy_shift push=24 pull=8
ENDPROC(memmove)
+ENDPROC(__memmove)
diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
index 6ca4535c47fb..0e7ff0423f50 100644
--- a/arch/arm/lib/memset.S
+++ b/arch/arm/lib/memset.S
@@ -13,6 +13,8 @@
.text
.align 5
+.weak memset
+ENTRY(__memset)
ENTRY(mmioset)
ENTRY(memset)
UNWIND( .fnstart )
@@ -132,6 +134,7 @@ UNWIND( .fnstart )
UNWIND( .fnend )
ENDPROC(memset)
ENDPROC(mmioset)
+ENDPROC(__memset)
ENTRY(__memset32)
UNWIND( .fnstart )
--
2.17.1
* [PATCH v7 4/7] ARM: Replace memory function for kasan
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: mark.rutland, alexandre.belloni, mhocko, julien.thierry,
catalin.marinas, linux-kernel, dhowells, yamada.masahiro,
ryabinin.a.a, glider, kvmarm, Florian Fainelli, corbet,
Abbott Liu, daniel.lezcano, linux, kasan-dev,
bcm-kernel-feedback-list, Andrey Ryabinin, drjones,
vladimir.murzin, keescook, arnd, marc.zyngier, andre.przywara,
philip, jinb.park7, tglx, dvyukov, nico, gregkh, ard.biesheuvel,
linux-doc, christoffer.dall, geert, rob, pombredanne, akpm,
thgarnie, kirill.shutemov
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Functions like memset/memmove/memcpy do a lot of memory accesses. If a
bad pointer is passed to one of these functions, it is important to
catch this. Compiler instrumentation cannot do so, since these
functions are written in assembly.
KASan replaces these memory functions with manually instrumented
variants. The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them. The
original functions also have aliases with a '__' prefix, so the
non-instrumented variants can be called when needed.
We must use __memcpy/__memset to replace memcpy/memset when we copy
.data to RAM and when we clear .bss, because kasan_early_init cannot be
called before the initialization of .data and .bss.
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/string.h | 17 +++++++++++++++++
arch/arm/kernel/head-common.S | 4 ++--
arch/arm/lib/memcpy.S | 3 +++
arch/arm/lib/memmove.S | 5 ++++-
arch/arm/lib/memset.S | 3 +++
5 files changed, 29 insertions(+), 3 deletions(-)
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index 111a1d8a41dd..1f9016bbf153 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -15,15 +15,18 @@ extern char * strchr(const char * s, int c);
#define __HAVE_ARCH_MEMCPY
extern void * memcpy(void *, const void *, __kernel_size_t);
+extern void *__memcpy(void *dest, const void *src, __kernel_size_t n);
#define __HAVE_ARCH_MEMMOVE
extern void * memmove(void *, const void *, __kernel_size_t);
+extern void *__memmove(void *dest, const void *src, __kernel_size_t n);
#define __HAVE_ARCH_MEMCHR
extern void * memchr(const void *, int, __kernel_size_t);
#define __HAVE_ARCH_MEMSET
extern void * memset(void *, int, __kernel_size_t);
+extern void *__memset(void *s, int c, __kernel_size_t n);
#define __HAVE_ARCH_MEMSET32
extern void *__memset32(uint32_t *, uint32_t v, __kernel_size_t);
@@ -39,4 +42,18 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
return __memset64(p, v, n * 8, v >> 32);
}
+
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+
+/*
+ * For files that not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+
#endif
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 4a3982812a40..6840c7c60a85 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -95,7 +95,7 @@ __mmap_switched:
THUMB( ldmia r4!, {r0, r1, r2, r3} )
THUMB( mov sp, r3 )
sub r2, r2, r1
- bl memcpy @ copy .data to RAM
+ bl __memcpy @ copy .data to RAM
#endif
ARM( ldmia r4!, {r0, r1, sp} )
@@ -103,7 +103,7 @@ __mmap_switched:
THUMB( mov sp, r3 )
sub r2, r1, r0
mov r1, #0
- bl memset @ clear .bss
+ bl __memset @ clear .bss
ldmia r4, {r0, r1, r2, r3}
str r9, [r0] @ Save processor ID
diff --git a/arch/arm/lib/memcpy.S b/arch/arm/lib/memcpy.S
index 09a333153dc6..ad4625d16e11 100644
--- a/arch/arm/lib/memcpy.S
+++ b/arch/arm/lib/memcpy.S
@@ -58,6 +58,8 @@
/* Prototype: void *memcpy(void *dest, const void *src, size_t n); */
+.weak memcpy
+ENTRY(__memcpy)
ENTRY(mmiocpy)
ENTRY(memcpy)
@@ -65,3 +67,4 @@ ENTRY(memcpy)
ENDPROC(memcpy)
ENDPROC(mmiocpy)
+ENDPROC(__memcpy)
diff --git a/arch/arm/lib/memmove.S b/arch/arm/lib/memmove.S
index b50e5770fb44..fd123ea5a5a4 100644
--- a/arch/arm/lib/memmove.S
+++ b/arch/arm/lib/memmove.S
@@ -24,12 +24,14 @@
* occurring in the opposite direction.
*/
+.weak memmove
+ENTRY(__memmove)
ENTRY(memmove)
UNWIND( .fnstart )
subs ip, r0, r1
cmphi r2, ip
- bls memcpy
+ bls __memcpy
stmfd sp!, {r0, r4, lr}
UNWIND( .fnend )
@@ -222,3 +224,4 @@ ENTRY(memmove)
18: backward_copy_shift push=24 pull=8
ENDPROC(memmove)
+ENDPROC(__memmove)
diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
index 6ca4535c47fb..0e7ff0423f50 100644
--- a/arch/arm/lib/memset.S
+++ b/arch/arm/lib/memset.S
@@ -13,6 +13,8 @@
.text
.align 5
+.weak memset
+ENTRY(__memset)
ENTRY(mmioset)
ENTRY(memset)
UNWIND( .fnstart )
@@ -132,6 +134,7 @@ UNWIND( .fnstart )
UNWIND( .fnend )
ENDPROC(memset)
ENDPROC(mmioset)
+ENDPROC(__memset)
ENTRY(__memset32)
UNWIND( .fnstart )
--
2.17.1
* [PATCH v7 5/7] ARM: Define the virtual space of KASan's shadow region
2020-01-17 22:48 ` Florian Fainelli
(?)
@ 2020-01-17 22:48 ` Florian Fainelli
-1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Abbott Liu, Andrey Ryabinin, Florian Fainelli,
bcm-kernel-feedback-list, glider, dvyukov, corbet, linux,
christoffer.dall, marc.zyngier, arnd, nico, vladimir.murzin,
keescook, jinb.park7, alexandre.belloni, ard.biesheuvel,
daniel.lezcano, pombredanne, rob, gregkh, akpm, mark.rutland,
catalin.marinas, yamada.masahiro, tglx, thgarnie, dhowells,
geert, andre.przywara, julien.thierry, drjones, philip, mhocko,
kirill.shutemov, kasan-dev, linux-doc, linux-kernel, kvmarm,
ryabinin.a.a
From: Abbott Liu <liuwenliang@huawei.com>
Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for
the ARM kernel address sanitizer.
+----+ 0xffffffff
| |
| |
| |
+----+ CONFIG_PAGE_OFFSET
| |\
| | |-> module virtual address space area.
| |/
+----+ MODULE_VADDR = KASAN_SHADOW_END
| |\
| | |-> the shadow area of kernel virtual address.
| |/
+----+ TASK_SIZE(start of kernel space) = KASAN_SHADOW_START the
| |\   shadow address of MODULE_VADDR
| | ---------------------+
| |                      |
+ + KASAN_SHADOW_OFFSET  |-> the user space area. Kernel address
| |                      |   sanitizer do not use this space.
| | ---------------------+
| |/
------ 0
1) KASAN_SHADOW_OFFSET:
This value is used to map an address to its corresponding shadow
address with the following formula:
shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
2) KASAN_SHADOW_START:
This value is the shadow address of MODULE_VADDR. It is the start
of the kernel virtual address space.
3) KASAN_SHADOW_END:
This value is the shadow address of 0x100000000. It is the end of
the kernel address sanitizer's shadow area. It is also the start of
the module area.
When KASan is enabled, TASK_SIZE is no longer an 8-bit rotated
constant, so the assembly (*.S) code that accesses TASK_SIZE must be
modified to load it from the literal pool instead of encoding it as
an immediate.
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/kasan_def.h | 63 ++++++++++++++++++++++++++++++++
arch/arm/include/asm/memory.h | 5 +++
arch/arm/kernel/entry-armv.S | 5 ++-
arch/arm/kernel/entry-common.S | 9 +++--
arch/arm/mm/mmu.c | 7 +++-
5 files changed, 83 insertions(+), 6 deletions(-)
create mode 100644 arch/arm/include/asm/kasan_def.h
diff --git a/arch/arm/include/asm/kasan_def.h b/arch/arm/include/asm/kasan_def.h
new file mode 100644
index 000000000000..4f0aa9869f54
--- /dev/null
+++ b/arch/arm/include/asm/kasan_def.h
@@ -0,0 +1,63 @@
+/*
+ * arch/arm/include/asm/kasan_def.h
+ *
+ * Copyright (c) 2018 Huawei Technologies Co., Ltd.
+ *
+ * Author: Abbott Liu <liuwenliang@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ASM_KASAN_DEF_H
+#define __ASM_KASAN_DEF_H
+
+#ifdef CONFIG_KASAN
+
+/*
+ * +----+ 0xffffffff
+ * | |
+ * | |
+ * | |
+ * +----+ CONFIG_PAGE_OFFSET
+ * | |\
+ * | | |-> module virtual address space area.
+ * | |/
+ * +----+ MODULE_VADDR = KASAN_SHADOW_END
+ * | |\
+ * | | |-> the shadow area of kernel virtual address.
+ * | |/
+ * +----+ TASK_SIZE(start of kernel space) = KASAN_SHADOW_START the
+ * | |\ shadow address of MODULE_VADDR
+ * | | ---------------------+
+ * | | |
+ * + + KASAN_SHADOW_OFFSET |-> the user space area. Kernel address
+ * | | | sanitizer do not use this space.
+ * | | ---------------------+
+ * | |/
+ * ------ 0
+ *
+ *1) KASAN_SHADOW_OFFSET:
+ * This value is used to map an address to the corresponding shadow address by
+ * the following formula: shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) +
+ * KASAN_SHADOW_OFFSET;
+ *
+ * 2) KASAN_SHADOW_START:
+ * This value is the MODULE_VADDR's shadow address. It is the start of kernel
+ * virtual space.
+ *
+ * 3) KASAN_SHADOW_END
+ * This value is the 0x100000000's shadow address. It is the end of kernel
+ * addresssanitizer's shadow area. It is also the start of the module area.
+ *
+ */
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END ((UL(1) << (32 - KASAN_SHADOW_SCALE_SHIFT)) \
+ + KASAN_SHADOW_OFFSET)
+#define KASAN_SHADOW_START ((KASAN_SHADOW_END >> 3) + KASAN_SHADOW_OFFSET)
+
+#endif
+#endif
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 99035b5891ef..5cfa9e5dc733 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -18,6 +18,7 @@
#ifdef CONFIG_NEED_MACH_MEMORY_H
#include <mach/memory.h>
#endif
+#include <asm/kasan_def.h>
/* PAGE_OFFSET - the virtual address of the start of the kernel image */
#define PAGE_OFFSET UL(CONFIG_PAGE_OFFSET)
@@ -28,7 +29,11 @@
* TASK_SIZE - the maximum size of a user space task.
* TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
*/
+#ifndef CONFIG_KASAN
#define TASK_SIZE (UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+#else
+#define TASK_SIZE (KASAN_SHADOW_START)
+#endif
#define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_16M)
/*
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 858d4e541532..8cf1cc30fa7f 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -180,7 +180,7 @@ ENDPROC(__und_invalid)
get_thread_info tsk
ldr r0, [tsk, #TI_ADDR_LIMIT]
- mov r1, #TASK_SIZE
+ ldr r1, =TASK_SIZE
str r1, [tsk, #TI_ADDR_LIMIT]
str r0, [sp, #SVC_ADDR_LIMIT]
@@ -434,7 +434,8 @@ ENDPROC(__fiq_abt)
@ if it was interrupted in a critical region. Here we
@ perform a quick test inline since it should be false
@ 99.9999% of the time. The rest is done out of line.
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
blhs kuser_cmpxchg64_fixup
#endif
#endif
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 271cb8a1eba1..fee279e28a72 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -50,7 +50,8 @@ __ret_fast_syscall:
UNWIND(.cantunwind )
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -87,7 +88,8 @@ __ret_fast_syscall:
#endif
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -128,7 +130,8 @@ ret_slow_syscall:
disable_irq_notrace @ disable interrupts
ENTRY(ret_to_user_from_irq)
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS]
tst r1, #_TIF_WORK_MASK
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 5d0d0f86e790..d05493ec7d7f 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1272,9 +1272,14 @@ static inline void prepare_page_table(void)
/*
* Clear out all the mappings below the kernel image.
*/
- for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
+ for (addr = 0; addr < TASK_SIZE; addr += PMD_SIZE)
pmd_clear(pmd_off_k(addr));
+#ifdef CONFIG_KASAN
+ /* TASK_SIZE ~ MODULES_VADDR is KASAN's shadow area -- skip over it */
+ addr = MODULES_VADDR;
+#endif
+
#ifdef CONFIG_XIP_KERNEL
/* The XIP kernel is mapped in the module area -- skip over it */
addr = ((unsigned long)_exiprom + PMD_SIZE - 1) & PMD_MASK;
--
2.17.1
* [PATCH v7 5/7] ARM: Define the virtual space of KASan's shadow region
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: alexandre.belloni, mhocko, catalin.marinas, linux-kernel,
dhowells, yamada.masahiro, ryabinin.a.a, glider, kvmarm,
Florian Fainelli, corbet, Abbott Liu, daniel.lezcano, linux,
kasan-dev, bcm-kernel-feedback-list, Andrey Ryabinin, keescook,
arnd, marc.zyngier, andre.przywara, philip, jinb.park7, tglx,
dvyukov, nico, gregkh, ard.biesheuvel, linux-doc, geert, rob,
pombredanne, akpm, thgarnie, kirill.shutemov
From: Abbott Liu <liuwenliang@huawei.com>
Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for
the ARM kernel address sanitizer.
+----+ 0xffffffff
|    |
|    |
|    |
+----+ CONFIG_PAGE_OFFSET
|    |\
|    | |-> module virtual address space area.
|    |/
+----+ MODULE_VADDR = KASAN_SHADOW_END
|    |\
|    | |-> the shadow area of kernel virtual address.
|    |/
+----+ TASK_SIZE (start of kernel space) = KASAN_SHADOW_START,
|    |\    the shadow address of MODULE_VADDR
|    | ---------------------+
|    |                      |
+    + KASAN_SHADOW_OFFSET  |-> the user space area. The kernel
|    |                      |   address sanitizer does not use
|    | ---------------------+   this space.
|    |/
------ 0
1) KASAN_SHADOW_OFFSET:
This value is used to map an address to its corresponding shadow
address with the following formula:
shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
2) KASAN_SHADOW_START:
This value is the shadow address of MODULE_VADDR. It is the start of
the kernel virtual space.
3) KASAN_SHADOW_END:
This value is the shadow address of 0x100000000. It is the end of the
kernel address sanitizer's shadow area. It is also the start of the
module area.
When KASan is enabled, TASK_SIZE is no longer an 8-bit rotated
constant, so the code that loads TASK_SIZE as an immediate in the *.S
files has to be changed to a literal-pool load.
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/kasan_def.h | 63 ++++++++++++++++++++++++++++++++
arch/arm/include/asm/memory.h | 5 +++
arch/arm/kernel/entry-armv.S | 5 ++-
arch/arm/kernel/entry-common.S | 9 +++--
arch/arm/mm/mmu.c | 7 +++-
5 files changed, 83 insertions(+), 6 deletions(-)
create mode 100644 arch/arm/include/asm/kasan_def.h
diff --git a/arch/arm/include/asm/kasan_def.h b/arch/arm/include/asm/kasan_def.h
new file mode 100644
index 000000000000..4f0aa9869f54
--- /dev/null
+++ b/arch/arm/include/asm/kasan_def.h
@@ -0,0 +1,63 @@
+/*
+ * arch/arm/include/asm/kasan_def.h
+ *
+ * Copyright (c) 2018 Huawei Technologies Co., Ltd.
+ *
+ * Author: Abbott Liu <liuwenliang@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ASM_KASAN_DEF_H
+#define __ASM_KASAN_DEF_H
+
+#ifdef CONFIG_KASAN
+
+/*
+ * +----+ 0xffffffff
+ * | |
+ * | |
+ * | |
+ * +----+ CONFIG_PAGE_OFFSET
+ * | |\
+ * | | |-> module virtual address space area.
+ * | |/
+ * +----+ MODULE_VADDR = KASAN_SHADOW_END
+ * | |\
+ * | | |-> the shadow area of kernel virtual address.
+ * | |/
+ * +----+ TASK_SIZE(start of kernel space) = KASAN_SHADOW_START the
+ * | |\ shadow address of MODULE_VADDR
+ * | | ---------------------+
+ * | | |
+ * + + KASAN_SHADOW_OFFSET |-> the user space area. Kernel address
+ * | | | sanitizer do not use this space.
+ * | | ---------------------+
+ * | |/
+ * ------ 0
+ *
+ *1) KASAN_SHADOW_OFFSET:
+ * This value is used to map an address to the corresponding shadow address by
+ * the following formula: shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) +
+ * KASAN_SHADOW_OFFSET;
+ *
+ * 2) KASAN_SHADOW_START:
+ * This value is the MODULE_VADDR's shadow address. It is the start of kernel
+ * virtual space.
+ *
+ * 3) KASAN_SHADOW_END
+ * This value is the 0x100000000's shadow address. It is the end of kernel
+ * address sanitizer's shadow area. It is also the start of the module area.
+ *
+ */
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END ((UL(1) << (32 - KASAN_SHADOW_SCALE_SHIFT)) \
+ + KASAN_SHADOW_OFFSET)
+#define KASAN_SHADOW_START ((KASAN_SHADOW_END >> 3) + KASAN_SHADOW_OFFSET)
+
+#endif
+#endif
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 99035b5891ef..5cfa9e5dc733 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -18,6 +18,7 @@
#ifdef CONFIG_NEED_MACH_MEMORY_H
#include <mach/memory.h>
#endif
+#include <asm/kasan_def.h>
/* PAGE_OFFSET - the virtual address of the start of the kernel image */
#define PAGE_OFFSET UL(CONFIG_PAGE_OFFSET)
@@ -28,7 +29,11 @@
* TASK_SIZE - the maximum size of a user space task.
* TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
*/
+#ifndef CONFIG_KASAN
#define TASK_SIZE (UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+#else
+#define TASK_SIZE (KASAN_SHADOW_START)
+#endif
#define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_16M)
/*
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 858d4e541532..8cf1cc30fa7f 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -180,7 +180,7 @@ ENDPROC(__und_invalid)
get_thread_info tsk
ldr r0, [tsk, #TI_ADDR_LIMIT]
- mov r1, #TASK_SIZE
+ ldr r1, =TASK_SIZE
str r1, [tsk, #TI_ADDR_LIMIT]
str r0, [sp, #SVC_ADDR_LIMIT]
@@ -434,7 +434,8 @@ ENDPROC(__fiq_abt)
@ if it was interrupted in a critical region. Here we
@ perform a quick test inline since it should be false
@ 99.9999% of the time. The rest is done out of line.
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
blhs kuser_cmpxchg64_fixup
#endif
#endif
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 271cb8a1eba1..fee279e28a72 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -50,7 +50,8 @@ __ret_fast_syscall:
UNWIND(.cantunwind )
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -87,7 +88,8 @@ __ret_fast_syscall:
#endif
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -128,7 +130,8 @@ ret_slow_syscall:
disable_irq_notrace @ disable interrupts
ENTRY(ret_to_user_from_irq)
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS]
tst r1, #_TIF_WORK_MASK
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 5d0d0f86e790..d05493ec7d7f 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1272,9 +1272,14 @@ static inline void prepare_page_table(void)
/*
* Clear out all the mappings below the kernel image.
*/
- for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
+ for (addr = 0; addr < TASK_SIZE; addr += PMD_SIZE)
pmd_clear(pmd_off_k(addr));
+#ifdef CONFIG_KASAN
+ /* TASK_SIZE ~ MODULES_VADDR is KASAN's shadow area -- skip over it */
+ addr = MODULES_VADDR;
+#endif
+
#ifdef CONFIG_XIP_KERNEL
/* The XIP kernel is mapped in the module area -- skip over it */
addr = ((unsigned long)_exiprom + PMD_SIZE - 1) & PMD_MASK;
--
2.17.1
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
* [PATCH v7 5/7] ARM: Define the virtual space of KASan's shadow region
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: mark.rutland, alexandre.belloni, mhocko, julien.thierry,
catalin.marinas, linux-kernel, dhowells, yamada.masahiro,
ryabinin.a.a, glider, kvmarm, Florian Fainelli, corbet,
Abbott Liu, daniel.lezcano, linux, kasan-dev,
bcm-kernel-feedback-list, Andrey Ryabinin, drjones,
vladimir.murzin, keescook, arnd, marc.zyngier, andre.przywara,
philip, jinb.park7, tglx, dvyukov, nico, gregkh, ard.biesheuvel,
linux-doc, christoffer.dall, geert, rob, pombredanne, akpm,
thgarnie, kirill.shutemov
From: Abbott Liu <liuwenliang@huawei.com>
Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for
the ARM kernel address sanitizer.
+----+ 0xffffffff
|    |
|    |
|    |
+----+ CONFIG_PAGE_OFFSET
|    |\
|    | |-> module virtual address space area.
|    |/
+----+ MODULE_VADDR = KASAN_SHADOW_END
|    |\
|    | |-> the shadow area of kernel virtual address.
|    |/
+----+ TASK_SIZE (start of kernel space) = KASAN_SHADOW_START,
|    |\    the shadow address of MODULE_VADDR
|    | ---------------------+
|    |                      |
+    + KASAN_SHADOW_OFFSET  |-> the user space area. The kernel
|    |                      |   address sanitizer does not use
|    | ---------------------+   this space.
|    |/
------ 0
1) KASAN_SHADOW_OFFSET:
This value is used to map an address to its corresponding shadow
address with the following formula:
shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
2) KASAN_SHADOW_START:
This value is the shadow address of MODULE_VADDR. It is the start of
the kernel virtual space.
3) KASAN_SHADOW_END:
This value is the shadow address of 0x100000000. It is the end of the
kernel address sanitizer's shadow area. It is also the start of the
module area.
When KASan is enabled, TASK_SIZE is no longer an 8-bit rotated
constant, so the code that loads TASK_SIZE as an immediate in the *.S
files has to be changed to a literal-pool load.
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/kasan_def.h | 63 ++++++++++++++++++++++++++++++++
arch/arm/include/asm/memory.h | 5 +++
arch/arm/kernel/entry-armv.S | 5 ++-
arch/arm/kernel/entry-common.S | 9 +++--
arch/arm/mm/mmu.c | 7 +++-
5 files changed, 83 insertions(+), 6 deletions(-)
create mode 100644 arch/arm/include/asm/kasan_def.h
diff --git a/arch/arm/include/asm/kasan_def.h b/arch/arm/include/asm/kasan_def.h
new file mode 100644
index 000000000000..4f0aa9869f54
--- /dev/null
+++ b/arch/arm/include/asm/kasan_def.h
@@ -0,0 +1,63 @@
+/*
+ * arch/arm/include/asm/kasan_def.h
+ *
+ * Copyright (c) 2018 Huawei Technologies Co., Ltd.
+ *
+ * Author: Abbott Liu <liuwenliang@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ASM_KASAN_DEF_H
+#define __ASM_KASAN_DEF_H
+
+#ifdef CONFIG_KASAN
+
+/*
+ * +----+ 0xffffffff
+ * | |
+ * | |
+ * | |
+ * +----+ CONFIG_PAGE_OFFSET
+ * | |\
+ * | | |-> module virtual address space area.
+ * | |/
+ * +----+ MODULE_VADDR = KASAN_SHADOW_END
+ * | |\
+ * | | |-> the shadow area of kernel virtual address.
+ * | |/
+ * +----+ TASK_SIZE(start of kernel space) = KASAN_SHADOW_START the
+ * | |\ shadow address of MODULE_VADDR
+ * | | ---------------------+
+ * | | |
+ * + + KASAN_SHADOW_OFFSET |-> the user space area. Kernel address
+ * | | | sanitizer do not use this space.
+ * | | ---------------------+
+ * | |/
+ * ------ 0
+ *
+ *1) KASAN_SHADOW_OFFSET:
+ * This value is used to map an address to the corresponding shadow address by
+ * the following formula: shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) +
+ * KASAN_SHADOW_OFFSET;
+ *
+ * 2) KASAN_SHADOW_START:
+ * This value is the MODULE_VADDR's shadow address. It is the start of kernel
+ * virtual space.
+ *
+ * 3) KASAN_SHADOW_END
+ * This value is the 0x100000000's shadow address. It is the end of kernel
+ * address sanitizer's shadow area. It is also the start of the module area.
+ *
+ */
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END ((UL(1) << (32 - KASAN_SHADOW_SCALE_SHIFT)) \
+ + KASAN_SHADOW_OFFSET)
+#define KASAN_SHADOW_START ((KASAN_SHADOW_END >> 3) + KASAN_SHADOW_OFFSET)
+
+#endif
+#endif
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 99035b5891ef..5cfa9e5dc733 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -18,6 +18,7 @@
#ifdef CONFIG_NEED_MACH_MEMORY_H
#include <mach/memory.h>
#endif
+#include <asm/kasan_def.h>
/* PAGE_OFFSET - the virtual address of the start of the kernel image */
#define PAGE_OFFSET UL(CONFIG_PAGE_OFFSET)
@@ -28,7 +29,11 @@
* TASK_SIZE - the maximum size of a user space task.
* TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
*/
+#ifndef CONFIG_KASAN
#define TASK_SIZE (UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
+#else
+#define TASK_SIZE (KASAN_SHADOW_START)
+#endif
#define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_16M)
/*
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 858d4e541532..8cf1cc30fa7f 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -180,7 +180,7 @@ ENDPROC(__und_invalid)
get_thread_info tsk
ldr r0, [tsk, #TI_ADDR_LIMIT]
- mov r1, #TASK_SIZE
+ ldr r1, =TASK_SIZE
str r1, [tsk, #TI_ADDR_LIMIT]
str r0, [sp, #SVC_ADDR_LIMIT]
@@ -434,7 +434,8 @@ ENDPROC(__fiq_abt)
@ if it was interrupted in a critical region. Here we
@ perform a quick test inline since it should be false
@ 99.9999% of the time. The rest is done out of line.
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
blhs kuser_cmpxchg64_fixup
#endif
#endif
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 271cb8a1eba1..fee279e28a72 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -50,7 +50,8 @@ __ret_fast_syscall:
UNWIND(.cantunwind )
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -87,7 +88,8 @@ __ret_fast_syscall:
#endif
disable_irq_notrace @ disable interrupts
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
@@ -128,7 +130,8 @@ ret_slow_syscall:
disable_irq_notrace @ disable interrupts
ENTRY(ret_to_user_from_irq)
ldr r2, [tsk, #TI_ADDR_LIMIT]
- cmp r2, #TASK_SIZE
+ ldr r1, =TASK_SIZE
+ cmp r2, r1
blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS]
tst r1, #_TIF_WORK_MASK
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 5d0d0f86e790..d05493ec7d7f 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1272,9 +1272,14 @@ static inline void prepare_page_table(void)
/*
* Clear out all the mappings below the kernel image.
*/
- for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
+ for (addr = 0; addr < TASK_SIZE; addr += PMD_SIZE)
pmd_clear(pmd_off_k(addr));
+#ifdef CONFIG_KASAN
+ /* TASK_SIZE ~ MODULES_VADDR is KASAN's shadow area -- skip over it */
+ addr = MODULES_VADDR;
+#endif
+
#ifdef CONFIG_XIP_KERNEL
/* The XIP kernel is mapped in the module area -- skip over it */
addr = ((unsigned long)_exiprom + PMD_SIZE - 1) & PMD_MASK;
--
2.17.1
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH v7 6/7] ARM: Initialize the mapping of KASan shadow memory
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Andrey Ryabinin, Abbott Liu, Florian Fainelli,
bcm-kernel-feedback-list, glider, dvyukov, corbet, linux,
christoffer.dall, marc.zyngier, arnd, nico, vladimir.murzin,
keescook, jinb.park7, alexandre.belloni, ard.biesheuvel,
daniel.lezcano, pombredanne, rob, gregkh, akpm, mark.rutland,
catalin.marinas, yamada.masahiro, tglx, thgarnie, dhowells,
geert, andre.przywara, julien.thierry, drjones, philip, mhocko,
kirill.shutemov, kasan-dev, linux-doc, linux-kernel, kvmarm,
ryabinin.a.a
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
This patch initializes the KASan shadow region's page table and memory.
KASan initialization happens in two stages:
1. At the early boot stage the whole shadow region is mapped to just
one physical page (kasan_zero_page). This is done by the function
kasan_early_init, which is called by __mmap_switched (arch/arm/kernel/
head-common.S).
---Andrey Ryabinin <aryabinin@virtuozzo.com>
2. After paging_init has run, kasan_zero_page is used as the zero
shadow for memory that KASan does not need to track, and new shadow
space is allocated for the memory that KASan does need to track. This
is done by the function kasan_init, which is called by setup_arch.
---Andrey Ryabinin <aryabinin@virtuozzo.com>
3. Add support for ARM LPAE.
If LPAE is enabled, the KASan shadow region's mapping table needs to
be copied in the pgd_alloc() function.
---Abbott Liu <liuwenliang@huawei.com>
4. Move kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate and
kasan_pgd_populate from the .meminit.text section to the .init.text
section.
---Reported by: Florian Fainelli <f.fainelli@gmail.com>
---Signed off by: Abbott Liu <liuwenliang@huawei.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Co-Developed-by: Abbott Liu <liuwenliang@huawei.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/kasan.h | 35 ++++
arch/arm/include/asm/pgalloc.h | 9 +-
arch/arm/include/asm/thread_info.h | 4 +
arch/arm/kernel/head-common.S | 3 +
arch/arm/kernel/setup.c | 2 +
arch/arm/mm/Makefile | 3 +
arch/arm/mm/kasan_init.c | 302 +++++++++++++++++++++++++++++
arch/arm/mm/pgd.c | 14 ++
8 files changed, 370 insertions(+), 2 deletions(-)
create mode 100644 arch/arm/include/asm/kasan.h
create mode 100644 arch/arm/mm/kasan_init.c
diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
new file mode 100644
index 000000000000..1801f4d30993
--- /dev/null
+++ b/arch/arm/include/asm/kasan.h
@@ -0,0 +1,35 @@
+/*
+ * arch/arm/include/asm/kasan.h
+ *
+ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifdef CONFIG_KASAN
+
+#include <asm/kasan_def.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from 'compiler's shadow offset' +
+ * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
+ */
+
+extern void kasan_init(void);
+
+#else
+static inline void kasan_init(void) { }
+#endif
+
+#endif
diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
index 069da393110c..d969f8058b26 100644
--- a/arch/arm/include/asm/pgalloc.h
+++ b/arch/arm/include/asm/pgalloc.h
@@ -21,6 +21,7 @@
#define _PAGE_KERNEL_TABLE (PMD_TYPE_TABLE | PMD_BIT4 | PMD_DOMAIN(DOMAIN_KERNEL))
#ifdef CONFIG_ARM_LPAE
+#define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t))
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
@@ -39,14 +40,18 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
}
#else /* !CONFIG_ARM_LPAE */
+#define PGD_SIZE (PAGE_SIZE << 2)
/*
* Since we have only two-level page tables, these are trivial
*/
#define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); })
#define pmd_free(mm, pmd) do { } while (0)
-#define pud_populate(mm,pmd,pte) BUG()
-
+#ifndef CONFIG_KASAN
+#define pud_populate(mm, pmd, pte) BUG()
+#else
+#define pud_populate(mm, pmd, pte) do { } while (0)
+#endif
#endif /* CONFIG_ARM_LPAE */
extern pgd_t *pgd_alloc(struct mm_struct *mm);
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 0d0d5178e2c3..2c940dcc953b 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -13,7 +13,11 @@
#include <asm/fpstate.h>
#include <asm/page.h>
+#ifdef CONFIG_KASAN
+#define THREAD_SIZE_ORDER 2
+#else
#define THREAD_SIZE_ORDER 1
+#endif
#define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
#define THREAD_START_SP (THREAD_SIZE - 8)
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 6840c7c60a85..89c80154b9ef 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -111,6 +111,9 @@ __mmap_switched:
str r8, [r2] @ Save atags pointer
cmp r3, #0
strne r10, [r3] @ Save control register values
+#ifdef CONFIG_KASAN
+ bl kasan_early_init
+#endif
mov lr, #0
b start_kernel
ENDPROC(__mmap_switched)
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index d0a464e317ea..b120df6325dc 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -58,6 +58,7 @@
#include <asm/unwind.h>
#include <asm/memblock.h>
#include <asm/virt.h>
+#include <asm/kasan.h>
#include "atags.h"
@@ -1130,6 +1131,7 @@ void __init setup_arch(char **cmdline_p)
early_ioremap_reset();
paging_init(mdesc);
+ kasan_init();
request_standard_resources(mdesc);
if (mdesc->restart)
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 432302911d6e..1c937135c9c4 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -112,3 +112,6 @@ obj-$(CONFIG_CACHE_L2X0_PMU) += cache-l2x0-pmu.o
obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o
obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o
obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o
+
+KASAN_SANITIZE_kasan_init.o := n
+obj-$(CONFIG_KASAN) += kasan_init.o
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
new file mode 100644
index 000000000000..7597efb36cb0
--- /dev/null
+++ b/arch/arm/mm/kasan_init.c
@@ -0,0 +1,302 @@
+/*
+ * This file contains kasan initialization code for ARM.
+ *
+ * Copyright (c) 2018 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan: " fmt
+#include <linux/kasan.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/sched/task.h>
+#include <linux/start_kernel.h>
+#include <asm/cputype.h>
+#include <asm/highmem.h>
+#include <asm/mach/map.h>
+#include <asm/memory.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/procinfo.h>
+#include <asm/proc-fns.h>
+#include <asm/tlbflush.h>
+#include <asm/cp15.h>
+
+#include "mm.h"
+
+static pgd_t tmp_pgd_table[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
+
+pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
+
+static __init void *kasan_alloc_block(size_t size, int node)
+{
+ return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
+ MEMBLOCK_ALLOC_KASAN, node);
+}
+
+static void __init kasan_early_pmd_populate(unsigned long start,
+ unsigned long end, pud_t *pud)
+{
+ unsigned long addr;
+ unsigned long next;
+ pmd_t *pmd;
+
+ pmd = pmd_offset(pud, start);
+ for (addr = start; addr < end;) {
+ pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
+ next = pmd_addr_end(addr, end);
+ addr = next;
+ flush_pmd_entry(pmd);
+ pmd++;
+ }
+}
+
+static void __init kasan_early_pud_populate(unsigned long start,
+ unsigned long end, pgd_t *pgd)
+{
+ unsigned long addr;
+ unsigned long next;
+ pud_t *pud;
+
+ pud = pud_offset(pgd, start);
+ for (addr = start; addr < end;) {
+ next = pud_addr_end(addr, end);
+ kasan_early_pmd_populate(addr, next, pud);
+ addr = next;
+ pud++;
+ }
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgdp)
+{
+ int i;
+ unsigned long start = KASAN_SHADOW_START;
+ unsigned long end = KASAN_SHADOW_END;
+ unsigned long addr;
+ unsigned long next;
+ pgd_t *pgd;
+
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+ &kasan_early_shadow_pte[i], pfn_pte(
+ virt_to_pfn(kasan_early_shadow_page),
+ __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY
+ | L_PTE_XN)));
+
+ pgd = pgd_offset_k(start);
+ for (addr = start; addr < end;) {
+ next = pgd_addr_end(addr, end);
+ kasan_early_pud_populate(addr, next, pgd);
+ addr = next;
+ pgd++;
+ }
+}
+
+extern struct proc_info_list *lookup_processor_type(unsigned int);
+
+void __init kasan_early_init(void)
+{
+ struct proc_info_list *list;
+
+ /*
+ * locate processor in the list of supported processor
+ * types. The linker builds this table for us from the
+ * entries in arch/arm/mm/proc-*.S
+ */
+ list = lookup_processor_type(read_cpuid_id());
+ if (list) {
+#ifdef MULTI_CPU
+ processor = *list->proc;
+#endif
+ }
+
+ BUILD_BUG_ON((KASAN_SHADOW_END - (1UL << 29)) != KASAN_SHADOW_OFFSET);
+ kasan_map_early_shadow(swapper_pg_dir);
+}
+
+static void __init clear_pgds(unsigned long start,
+ unsigned long end)
+{
+ for (; start && start < end; start += PMD_SIZE)
+ pmd_clear(pmd_off_k(start));
+}
+
+pte_t * __init kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
+{
+ pte_t *pte = pte_offset_kernel(pmd, addr);
+
+ if (pte_none(*pte)) {
+ pte_t entry;
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p)
+ return NULL;
+ entry = pfn_pte(virt_to_pfn(p),
+ __pgprot(pgprot_val(PAGE_KERNEL)));
+ set_pte_at(&init_mm, addr, pte, entry);
+ }
+ return pte;
+}
+
+pmd_t * __init kasan_pmd_populate(pud_t *pud, unsigned long addr, int node)
+{
+ pmd_t *pmd = pmd_offset(pud, addr);
+
+ if (pmd_none(*pmd)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p)
+ return NULL;
+ pmd_populate_kernel(&init_mm, pmd, p);
+ }
+ return pmd;
+}
+
+pud_t * __init kasan_pud_populate(pgd_t *pgd, unsigned long addr, int node)
+{
+ pud_t *pud = pud_offset(pgd, addr);
+
+ if (pud_none(*pud)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p)
+ return NULL;
+ pr_err("populating pud addr %lx\n", addr);
+ pud_populate(&init_mm, pud, p);
+ }
+ return pud;
+}
+
+pgd_t * __init kasan_pgd_populate(unsigned long addr, int node)
+{
+ pgd_t *pgd = pgd_offset_k(addr);
+
+ if (pgd_none(*pgd)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p)
+ return NULL;
+ pgd_populate(&init_mm, pgd, p);
+ }
+ return pgd;
+}
+
+static int __init create_mapping(unsigned long start, unsigned long end,
+ int node)
+{
+ unsigned long addr = start;
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ pr_info("populating shadow for %lx, %lx\n", start, end);
+
+ for (; addr < end; addr += PAGE_SIZE) {
+ pgd = kasan_pgd_populate(addr, node);
+ if (!pgd)
+ return -ENOMEM;
+
+ pud = kasan_pud_populate(pgd, addr, node);
+ if (!pud)
+ return -ENOMEM;
+
+ pmd = kasan_pmd_populate(pud, addr, node);
+ if (!pmd)
+ return -ENOMEM;
+
+ pte = kasan_pte_populate(pmd, addr, node);
+ if (!pte)
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+
+void __init kasan_init(void)
+{
+ struct memblock_region *reg;
+ u64 orig_ttbr0;
+ int i;
+
+ /*
+ * We are going to perform proper setup of shadow memory.
+ * First, we should unmap the early shadow (clear_pgds() call below).
+ * However, instrumented code cannot execute without shadow memory.
+ * tmp_pgd_table and tmp_pmd_table are used to keep the early shadow
+ * mapped until the full shadow setup is finished.
+ */
+ orig_ttbr0 = get_ttbr0();
+
+#ifdef CONFIG_ARM_LPAE
+ memcpy(tmp_pmd_table,
+ pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
+ sizeof(tmp_pmd_table));
+ memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+ set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
+ __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
+ set_ttbr0(__pa(tmp_pgd_table));
+#else
+ memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+ set_ttbr0((u64)__pa(tmp_pgd_table));
+#endif
+ flush_cache_all();
+ local_flush_bp_all();
+ local_flush_tlb_all();
+
+ clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+ kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
+ kasan_mem_to_shadow((void *)-1UL) + 1);
+
+ for_each_memblock(memory, reg) {
+ void *start = __va(reg->base);
+ void *end = __va(reg->base + reg->size);
+
+ if (reg->base + reg->size > arm_lowmem_limit)
+ end = __va(arm_lowmem_limit);
+ if (start >= end)
+ break;
+
+ create_mapping((unsigned long)kasan_mem_to_shadow(start),
+ (unsigned long)kasan_mem_to_shadow(end),
+ NUMA_NO_NODE);
+ }
+
+ /* 1. The modules' global variables are in MODULES_VADDR ~ MODULES_END,
+ * so we need a mapping for them.
+ * 2. The shadow of PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE and the shadow of
+ * MODULES_VADDR ~ MODULES_END fall in the same PMD_SIZE region, so we
+ * can't use kasan_populate_zero_shadow.
+ */
+ create_mapping(
+ (unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
+
+ (unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE +
+ PMD_SIZE)),
+ NUMA_NO_NODE);
+
+ /*
+ * KAsan may reuse the contents of kasan_early_shadow_pte directly, so
+ * we should make sure that it maps the zero page read-only.
+ */
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+ &kasan_early_shadow_pte[i],
+ pfn_pte(virt_to_pfn(kasan_early_shadow_page),
+ __pgprot(pgprot_val(PAGE_KERNEL)
+ | L_PTE_RDONLY)));
+ memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+ set_ttbr0(orig_ttbr0);
+ flush_cache_all();
+ local_flush_bp_all();
+ local_flush_tlb_all();
+ pr_info("Kernel address sanitizer initialized\n");
+ init_task.kasan_depth = 0;
+}
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index 478bd2c6aa50..92a408262df2 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -61,6 +61,20 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
new_pmd = pmd_alloc(mm, new_pud, 0);
if (!new_pmd)
goto no_pmd;
+#ifdef CONFIG_KASAN
+ /*
+ * Copy the PMD table for the KASAN shadow mappings.
+ */
+ init_pgd = pgd_offset_k(TASK_SIZE);
+ init_pud = pud_offset(init_pgd, TASK_SIZE);
+ init_pmd = pmd_offset(init_pud, TASK_SIZE);
+ new_pmd = pmd_offset(new_pud, TASK_SIZE);
+ memcpy(new_pmd, init_pmd,
+ (pmd_index(MODULES_VADDR)-pmd_index(TASK_SIZE))
+ * sizeof(pmd_t));
+ clean_dcache_area(new_pmd, PTRS_PER_PMD*sizeof(pmd_t));
+#endif
+
#endif
if (!vectors_high()) {
--
2.17.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v7 6/7] ARM: Initialize the mapping of KASan shadow memory
@ 2020-01-17 22:48 ` Florian Fainelli
0 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: alexandre.belloni, mhocko, catalin.marinas, linux-kernel,
dhowells, yamada.masahiro, ryabinin.a.a, glider, kvmarm,
Florian Fainelli, corbet, Abbott Liu, daniel.lezcano, linux,
kasan-dev, bcm-kernel-feedback-list, Andrey Ryabinin, keescook,
arnd, marc.zyngier, andre.przywara, philip, jinb.park7, tglx,
dvyukov, nico, gregkh, ard.biesheuvel, linux-doc, geert, rob,
pombredanne, akpm, thgarnie, kirill.shutemov
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
This patch initializes the KASan shadow region's page tables and memory.
KASan initialization happens in two stages:
1. At the early boot stage the whole shadow region is mapped to just
one physical page (kasan_zero_page). This is done by kasan_early_init(),
which is called from __mmap_switched (arch/arm/kernel/head-common.S).
---Andrey Ryabinin <aryabinin@virtuozzo.com>
2. After paging_init() has run, kasan_zero_page is used as the zero
shadow for memory that KASan does not need to track, and new shadow
memory is allocated for the memory that KASan does track. This is done
by kasan_init(), which is called from setup_arch().
---Andrey Ryabinin <aryabinin@virtuozzo.com>
Additional changes:
3. Add ARM LPAE support: if LPAE is enabled, the KASan shadow region's
mapping table needs to be copied in pgd_alloc().
---Abbott Liu <liuwenliang@huawei.com>
4. Move kasan_pte_populate(), kasan_pmd_populate(), kasan_pud_populate()
and kasan_pgd_populate() from the .meminit.text section to the
.init.text section.
---Reported-by: Florian Fainelli <f.fainelli@gmail.com>
---Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Co-Developed-by: Abbott Liu <liuwenliang@huawei.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
arch/arm/include/asm/kasan.h | 35 ++++
arch/arm/include/asm/pgalloc.h | 9 +-
arch/arm/include/asm/thread_info.h | 4 +
arch/arm/kernel/head-common.S | 3 +
arch/arm/kernel/setup.c | 2 +
arch/arm/mm/Makefile | 3 +
arch/arm/mm/kasan_init.c | 302 +++++++++++++++++++++++++++++
arch/arm/mm/pgd.c | 14 ++
8 files changed, 370 insertions(+), 2 deletions(-)
create mode 100644 arch/arm/include/asm/kasan.h
create mode 100644 arch/arm/mm/kasan_init.c
diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h
new file mode 100644
index 000000000000..1801f4d30993
--- /dev/null
+++ b/arch/arm/include/asm/kasan.h
@@ -0,0 +1,35 @@
+/*
+ * arch/arm/include/asm/kasan.h
+ *
+ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifdef CONFIG_KASAN
+
+#include <asm/kasan_def.h>
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+
+/*
+ * Compiler uses shadow offset assuming that addresses start
+ * from 0. Kernel addresses don't start from 0, so shadow
+ * for kernel really starts from 'compiler's shadow offset' +
+ * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT)
+ */
+
+extern void kasan_init(void);
+
+#else
+static inline void kasan_init(void) { }
+#endif
+
+#endif
diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
index 069da393110c..d969f8058b26 100644
--- a/arch/arm/include/asm/pgalloc.h
+++ b/arch/arm/include/asm/pgalloc.h
@@ -21,6 +21,7 @@
#define _PAGE_KERNEL_TABLE (PMD_TYPE_TABLE | PMD_BIT4 | PMD_DOMAIN(DOMAIN_KERNEL))
#ifdef CONFIG_ARM_LPAE
+#define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t))
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
@@ -39,14 +40,18 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
}
#else /* !CONFIG_ARM_LPAE */
+#define PGD_SIZE (PAGE_SIZE << 2)
/*
* Since we have only two-level page tables, these are trivial
*/
#define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); })
#define pmd_free(mm, pmd) do { } while (0)
-#define pud_populate(mm,pmd,pte) BUG()
-
+#ifndef CONFIG_KASAN
+#define pud_populate(mm, pmd, pte) BUG()
+#else
+#define pud_populate(mm, pmd, pte) do { } while (0)
+#endif
#endif /* CONFIG_ARM_LPAE */
extern pgd_t *pgd_alloc(struct mm_struct *mm);
diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 0d0d5178e2c3..2c940dcc953b 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -13,7 +13,11 @@
#include <asm/fpstate.h>
#include <asm/page.h>
+#ifdef CONFIG_KASAN
+#define THREAD_SIZE_ORDER 2
+#else
#define THREAD_SIZE_ORDER 1
+#endif
#define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
#define THREAD_START_SP (THREAD_SIZE - 8)
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S
index 6840c7c60a85..89c80154b9ef 100644
--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -111,6 +111,9 @@ __mmap_switched:
str r8, [r2] @ Save atags pointer
cmp r3, #0
strne r10, [r3] @ Save control register values
+#ifdef CONFIG_KASAN
+ bl kasan_early_init
+#endif
mov lr, #0
b start_kernel
ENDPROC(__mmap_switched)
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index d0a464e317ea..b120df6325dc 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -58,6 +58,7 @@
#include <asm/unwind.h>
#include <asm/memblock.h>
#include <asm/virt.h>
+#include <asm/kasan.h>
#include "atags.h"
@@ -1130,6 +1131,7 @@ void __init setup_arch(char **cmdline_p)
early_ioremap_reset();
paging_init(mdesc);
+ kasan_init();
request_standard_resources(mdesc);
if (mdesc->restart)
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 432302911d6e..1c937135c9c4 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -112,3 +112,6 @@ obj-$(CONFIG_CACHE_L2X0_PMU) += cache-l2x0-pmu.o
obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o
obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o
obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o
+
+KASAN_SANITIZE_kasan_init.o := n
+obj-$(CONFIG_KASAN) += kasan_init.o
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
new file mode 100644
index 000000000000..7597efb36cb0
--- /dev/null
+++ b/arch/arm/mm/kasan_init.c
@@ -0,0 +1,302 @@
+/*
+ * This file contains kasan initialization code for ARM.
+ *
+ * Copyright (c) 2018 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kasan: " fmt
+#include <linux/kasan.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/sched/task.h>
+#include <linux/start_kernel.h>
+#include <asm/cputype.h>
+#include <asm/highmem.h>
+#include <asm/mach/map.h>
+#include <asm/memory.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/procinfo.h>
+#include <asm/proc-fns.h>
+#include <asm/tlbflush.h>
+#include <asm/cp15.h>
+
+#include "mm.h"
+
+static pgd_t tmp_pgd_table[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
+
+pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
+
+static __init void *kasan_alloc_block(size_t size, int node)
+{
+ return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
+ MEMBLOCK_ALLOC_KASAN, node);
+}
+
+static void __init kasan_early_pmd_populate(unsigned long start,
+ unsigned long end, pud_t *pud)
+{
+ unsigned long addr;
+ unsigned long next;
+ pmd_t *pmd;
+
+ pmd = pmd_offset(pud, start);
+ for (addr = start; addr < end;) {
+ pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
+ next = pmd_addr_end(addr, end);
+ addr = next;
+ flush_pmd_entry(pmd);
+ pmd++;
+ }
+}
+
+static void __init kasan_early_pud_populate(unsigned long start,
+ unsigned long end, pgd_t *pgd)
+{
+ unsigned long addr;
+ unsigned long next;
+ pud_t *pud;
+
+ pud = pud_offset(pgd, start);
+ for (addr = start; addr < end;) {
+ next = pud_addr_end(addr, end);
+ kasan_early_pmd_populate(addr, next, pud);
+ addr = next;
+ pud++;
+ }
+}
+
+void __init kasan_map_early_shadow(pgd_t *pgdp)
+{
+ int i;
+ unsigned long start = KASAN_SHADOW_START;
+ unsigned long end = KASAN_SHADOW_END;
+ unsigned long addr;
+ unsigned long next;
+ pgd_t *pgd;
+
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+ &kasan_early_shadow_pte[i], pfn_pte(
+ virt_to_pfn(kasan_early_shadow_page),
+ __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY
+ | L_PTE_XN)));
+
+ pgd = pgd_offset_k(start);
+ for (addr = start; addr < end;) {
+ next = pgd_addr_end(addr, end);
+ kasan_early_pud_populate(addr, next, pgd);
+ addr = next;
+ pgd++;
+ }
+}
+
+extern struct proc_info_list *lookup_processor_type(unsigned int);
+
+void __init kasan_early_init(void)
+{
+ struct proc_info_list *list;
+
+ /*
+ * locate processor in the list of supported processor
+ * types. The linker builds this table for us from the
+ * entries in arch/arm/mm/proc-*.S
+ */
+ list = lookup_processor_type(read_cpuid_id());
+ if (list) {
+#ifdef MULTI_CPU
+ processor = *list->proc;
+#endif
+ }
+
+ BUILD_BUG_ON((KASAN_SHADOW_END - (1UL << 29)) != KASAN_SHADOW_OFFSET);
+ kasan_map_early_shadow(swapper_pg_dir);
+}
+
+static void __init clear_pgds(unsigned long start,
+ unsigned long end)
+{
+ for (; start && start < end; start += PMD_SIZE)
+ pmd_clear(pmd_off_k(start));
+}
+
+pte_t * __init kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
+{
+ pte_t *pte = pte_offset_kernel(pmd, addr);
+
+ if (pte_none(*pte)) {
+ pte_t entry;
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p)
+ return NULL;
+ entry = pfn_pte(virt_to_pfn(p),
+ __pgprot(pgprot_val(PAGE_KERNEL)));
+ set_pte_at(&init_mm, addr, pte, entry);
+ }
+ return pte;
+}
+
+pmd_t * __init kasan_pmd_populate(pud_t *pud, unsigned long addr, int node)
+{
+ pmd_t *pmd = pmd_offset(pud, addr);
+
+ if (pmd_none(*pmd)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p)
+ return NULL;
+ pmd_populate_kernel(&init_mm, pmd, p);
+ }
+ return pmd;
+}
+
+pud_t * __init kasan_pud_populate(pgd_t *pgd, unsigned long addr, int node)
+{
+ pud_t *pud = pud_offset(pgd, addr);
+
+ if (pud_none(*pud)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p)
+ return NULL;
+ pr_err("populating pud addr %lx\n", addr);
+ pud_populate(&init_mm, pud, p);
+ }
+ return pud;
+}
+
+pgd_t * __init kasan_pgd_populate(unsigned long addr, int node)
+{
+ pgd_t *pgd = pgd_offset_k(addr);
+
+ if (pgd_none(*pgd)) {
+ void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+ if (!p)
+ return NULL;
+ pgd_populate(&init_mm, pgd, p);
+ }
+ return pgd;
+}
+
+static int __init create_mapping(unsigned long start, unsigned long end,
+ int node)
+{
+ unsigned long addr = start;
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ pr_info("populating shadow for %lx, %lx\n", start, end);
+
+ for (; addr < end; addr += PAGE_SIZE) {
+ pgd = kasan_pgd_populate(addr, node);
+ if (!pgd)
+ return -ENOMEM;
+
+ pud = kasan_pud_populate(pgd, addr, node);
+ if (!pud)
+ return -ENOMEM;
+
+ pmd = kasan_pmd_populate(pud, addr, node);
+ if (!pmd)
+ return -ENOMEM;
+
+ pte = kasan_pte_populate(pmd, addr, node);
+ if (!pte)
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+
+void __init kasan_init(void)
+{
+ struct memblock_region *reg;
+ u64 orig_ttbr0;
+ int i;
+
+	/*
+	 * We are going to perform the proper setup of shadow memory.
+	 * First we must unmap the early shadow (the clear_pgds() call
+	 * below). However, instrumented code cannot execute without shadow
+	 * memory, so tmp_pgd_table and tmp_pmd_table are used to keep the
+	 * early shadow mapped until the full shadow setup is finished.
+	 */
+ orig_ttbr0 = get_ttbr0();
+
+#ifdef CONFIG_ARM_LPAE
+ memcpy(tmp_pmd_table,
+ pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
+ sizeof(tmp_pmd_table));
+ memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+ set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
+ __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
+ set_ttbr0(__pa(tmp_pgd_table));
+#else
+ memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+ set_ttbr0((u64)__pa(tmp_pgd_table));
+#endif
+ flush_cache_all();
+ local_flush_bp_all();
+ local_flush_tlb_all();
+
+ clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+ kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
+ kasan_mem_to_shadow((void *)-1UL) + 1);
+
+ for_each_memblock(memory, reg) {
+ void *start = __va(reg->base);
+ void *end = __va(reg->base + reg->size);
+
+ if (reg->base + reg->size > arm_lowmem_limit)
+ end = __va(arm_lowmem_limit);
+ if (start >= end)
+ break;
+
+ create_mapping((unsigned long)kasan_mem_to_shadow(start),
+ (unsigned long)kasan_mem_to_shadow(end),
+ NUMA_NO_NODE);
+ }
+
+	/*
+	 * 1. Module global variables live in MODULES_VADDR ~ MODULES_END,
+	 *    so we need a real mapping for their shadow.
+	 * 2. The shadow of PKMAP_BASE ~ PKMAP_BASE + PMD_SIZE and the
+	 *    shadow of MODULES_VADDR ~ MODULES_END share the same PMD_SIZE
+	 *    region, so we can't use kasan_populate_zero_shadow() here.
+	 */
+ create_mapping(
+ (unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
+
+ (unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE +
+ PMD_SIZE)),
+ NUMA_NO_NODE);
+
+ /*
+ * KAsan may reuse the contents of kasan_early_shadow_pte directly, so
+ * we should make sure that it maps the zero page read-only.
+ */
+ for (i = 0; i < PTRS_PER_PTE; i++)
+ set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+ &kasan_early_shadow_pte[i],
+ pfn_pte(virt_to_pfn(kasan_early_shadow_page),
+ __pgprot(pgprot_val(PAGE_KERNEL)
+ | L_PTE_RDONLY)));
+ memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+ set_ttbr0(orig_ttbr0);
+ flush_cache_all();
+ local_flush_bp_all();
+ local_flush_tlb_all();
+ pr_info("Kernel address sanitizer initialized\n");
+ init_task.kasan_depth = 0;
+}
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index 478bd2c6aa50..92a408262df2 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -61,6 +61,20 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
new_pmd = pmd_alloc(mm, new_pud, 0);
if (!new_pmd)
goto no_pmd;
+#ifdef CONFIG_KASAN
+	/* Copy the PMD table for the KASAN shadow mappings. */
+ init_pgd = pgd_offset_k(TASK_SIZE);
+ init_pud = pud_offset(init_pgd, TASK_SIZE);
+ init_pmd = pmd_offset(init_pud, TASK_SIZE);
+ new_pmd = pmd_offset(new_pud, TASK_SIZE);
+ memcpy(new_pmd, init_pmd,
+ (pmd_index(MODULES_VADDR)-pmd_index(TASK_SIZE))
+ * sizeof(pmd_t));
+ clean_dcache_area(new_pmd, PTRS_PER_PMD*sizeof(pmd_t));
+#endif
+
#endif
if (!vectors_high()) {
--
2.17.1
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v7 7/7] ARM: Enable KASan for ARM
@ 2020-01-17 22:48 ` Florian Fainelli
-1 siblings, 0 replies; 33+ messages in thread
From: Florian Fainelli @ 2020-01-17 22:48 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Andrey Ryabinin, Abbott Liu, Florian Fainelli,
bcm-kernel-feedback-list, glider, dvyukov, corbet, linux,
christoffer.dall, marc.zyngier, arnd, nico, vladimir.murzin,
keescook, jinb.park7, alexandre.belloni, ard.biesheuvel,
daniel.lezcano, pombredanne, rob, gregkh, akpm, mark.rutland,
catalin.marinas, yamada.masahiro, tglx, thgarnie, dhowells,
geert, andre.przywara, julien.thierry, drjones, philip, mhocko,
kirill.shutemov, kasan-dev, linux-doc, linux-kernel, kvmarm,
ryabinin.a.a
From: Andrey Ryabinin <ryabinin@virtuozzo.com>
This patch enables the kernel address sanitizer for ARM. XIP_KERNEL has
not been tested and is therefore not allowed.
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
---
Documentation/dev-tools/kasan.rst | 4 ++--
arch/arm/Kconfig | 9 +++++++++
arch/arm/boot/compressed/Makefile | 1 +
drivers/firmware/efi/libstub/Makefile | 3 ++-
4 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index e4d66e7c50de..6acd949989c3 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -21,8 +21,8 @@ global variables yet.
Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
-Currently generic KASAN is supported for the x86_64, arm64, xtensa and s390
-architectures, and tag-based KASAN is supported only for arm64.
+Currently generic KASAN is supported for the x86_64, arm, arm64, xtensa and
+s390 architectures, and tag-based KASAN is supported only for arm64.
Usage
-----
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 96dab76da3b3..70a7eb50984e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -65,6 +65,7 @@ config ARM
select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
+ select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
select HAVE_ARCH_MMAP_RND_BITS if MMU
select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
select HAVE_ARCH_THREAD_STRUCT_WHITELIST
@@ -212,6 +213,14 @@ config ARCH_MAY_HAVE_PC_FDC
config ZONE_DMA
bool
+config KASAN_SHADOW_OFFSET
+ hex
+ depends on KASAN
+ default 0x1f000000 if PAGE_OFFSET=0x40000000
+ default 0x5f000000 if PAGE_OFFSET=0x80000000
+ default 0x9f000000 if PAGE_OFFSET=0xC0000000
+ default 0xffffffff
+
config ARCH_SUPPORTS_UPROBES
def_bool y
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index 83991a0447fa..efda24b00a44 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -25,6 +25,7 @@ endif
GCOV_PROFILE := n
KASAN_SANITIZE := n
+CFLAGS_KERNEL += -D__SANITIZE_ADDRESS__
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index c35f893897e1..c8b36824189b 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -20,7 +20,8 @@ cflags-$(CONFIG_ARM64) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
-fpie $(DISABLE_STACKLEAK_PLUGIN)
cflags-$(CONFIG_ARM) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
-fno-builtin -fpic \
- $(call cc-option,-mno-single-pic-base)
+ $(call cc-option,-mno-single-pic-base) \
+ -D__SANITIZE_ADDRESS__
cflags-$(CONFIG_EFI_ARMSTUB) += -I$(srctree)/scripts/dtc/libfdt
--
2.17.1
* Re: [PATCH v7 7/7] ARM: Enable KASan for ARM
2020-01-17 22:48 ` Florian Fainelli
(?)
@ 2020-04-10 10:45 ` Ard Biesheuvel
-1 siblings, 0 replies; 33+ messages in thread
From: Ard Biesheuvel @ 2020-04-10 10:45 UTC (permalink / raw)
To: Florian Fainelli
Cc: linux-arm-kernel, Andrey Ryabinin, Abbott Liu,
bcm-kernel-feedback-list, Alexander Potapenko, Dmitry Vyukov,
Jonathan Corbet, Russell King, Christoffer Dall, Marc Zyngier,
Arnd Bergmann, Nicolas Pitre, Vladimir Murzin, Kees Cook,
Jinbum Park, Alexandre Belloni, Daniel Lezcano,
Philippe Ombredanne, Rob Landley, Greg Kroah-Hartman,
Andrew Morton, Mark Rutland, Catalin Marinas, Masahiro Yamada,
Thomas Gleixner, Thomas Garnier, David Howells,
Geert Uytterhoeven, Andre Przywara, Julien Thierry, Andrew Jones,
Philip Derrin, Michal Hocko, Kirill A. Shutemov, kasan-dev,
Linux Doc Mailing List, Linux Kernel Mailing List, kvmarm,
Andrey Ryabinin
On Fri, 17 Jan 2020 at 23:52, Florian Fainelli <f.fainelli@gmail.com> wrote:
>
> From: Andrey Ryabinin <ryabinin@virtuozzo.com>
>
> This patch enables the kernel address sanitizer for ARM. XIP_KERNEL has
> not been tested and is therefore not allowed.
>
> Acked-by: Dmitry Vyukov <dvyukov@google.com>
> Tested-by: Linus Walleij <linus.walleij@linaro.org>
> Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
> ---
> Documentation/dev-tools/kasan.rst | 4 ++--
> arch/arm/Kconfig | 9 +++++++++
> arch/arm/boot/compressed/Makefile | 1 +
> drivers/firmware/efi/libstub/Makefile | 3 ++-
> 4 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
> index e4d66e7c50de..6acd949989c3 100644
> --- a/Documentation/dev-tools/kasan.rst
> +++ b/Documentation/dev-tools/kasan.rst
> @@ -21,8 +21,8 @@ global variables yet.
>
> Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
>
> -Currently generic KASAN is supported for the x86_64, arm64, xtensa and s390
> -architectures, and tag-based KASAN is supported only for arm64.
> +Currently generic KASAN is supported for the x86_64, arm, arm64, xtensa and
> +s390 architectures, and tag-based KASAN is supported only for arm64.
>
> Usage
> -----
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 96dab76da3b3..70a7eb50984e 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -65,6 +65,7 @@ config ARM
> select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
> select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
> select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
> + select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
> select HAVE_ARCH_MMAP_RND_BITS if MMU
> select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
> select HAVE_ARCH_THREAD_STRUCT_WHITELIST
> @@ -212,6 +213,14 @@ config ARCH_MAY_HAVE_PC_FDC
> config ZONE_DMA
> bool
>
> +config KASAN_SHADOW_OFFSET
> + hex
> + depends on KASAN
> + default 0x1f000000 if PAGE_OFFSET=0x40000000
> + default 0x5f000000 if PAGE_OFFSET=0x80000000
> + default 0x9f000000 if PAGE_OFFSET=0xC0000000
> + default 0xffffffff
> +
> config ARCH_SUPPORTS_UPROBES
> def_bool y
>
> diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
> index 83991a0447fa..efda24b00a44 100644
> --- a/arch/arm/boot/compressed/Makefile
> +++ b/arch/arm/boot/compressed/Makefile
> @@ -25,6 +25,7 @@ endif
>
> GCOV_PROFILE := n
> KASAN_SANITIZE := n
> +CFLAGS_KERNEL += -D__SANITIZE_ADDRESS__
>
> # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
> KCOV_INSTRUMENT := n
> diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> index c35f893897e1..c8b36824189b 100644
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -20,7 +20,8 @@ cflags-$(CONFIG_ARM64) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
> -fpie $(DISABLE_STACKLEAK_PLUGIN)
> cflags-$(CONFIG_ARM) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
> -fno-builtin -fpic \
> - $(call cc-option,-mno-single-pic-base)
> + $(call cc-option,-mno-single-pic-base) \
> + -D__SANITIZE_ADDRESS__
>
I am not too crazy about this need to unconditionally 'enable' KASAN
on the command line like this, in order to be able to disable it again
when CONFIG_KASAN=y.
Could we instead add something like this at the top of
arch/arm/boot/compressed/string.c?
#ifdef CONFIG_KASAN
#undef memcpy
#undef memmove
#undef memset
void *__memcpy(void *__dest, __const void *__src, size_t __n) __alias(memcpy);
void *__memmove(void *__dest, __const void *__src, size_t count)
__alias(memmove);
void *__memset(void *s, int c, size_t count) __alias(memset);
#endif
* Re: [PATCH v7 7/7] ARM: Enable KASan for ARM
2020-04-10 10:45 ` Ard Biesheuvel
@ 2020-04-10 10:47 ` Ard Biesheuvel
-1 siblings, 0 replies; 33+ messages in thread
From: Ard Biesheuvel @ 2020-04-10 10:47 UTC (permalink / raw)
To: Florian Fainelli, Linus Walleij
Cc: Mark Rutland, Alexandre Belloni, Michal Hocko, Julien Thierry,
Catalin Marinas, Linux Kernel Mailing List, David Howells,
Masahiro Yamada, Andrey Ryabinin, Alexander Potapenko, kvmarm,
Jonathan Corbet, Abbott Liu, Daniel Lezcano, Russell King,
kasan-dev, bcm-kernel-feedback-list, Dmitry Vyukov,
Geert Uytterhoeven, Andrew Jones, Vladimir Murzin, Kees Cook,
Arnd Bergmann, Marc Zyngier, Andre Przywara, Philip Derrin,
Jinbum Park, Thomas Gleixner, linux-arm-kernel, Nicolas Pitre,
Greg Kroah-Hartman, Linux Doc Mailing List, Christoffer Dall,
Thomas Garnier, Rob Landley, Philippe Ombredanne, Andrew Morton,
Andrey Ryabinin, Kirill A. Shutemov
(+ Linus)
On Fri, 10 Apr 2020 at 12:45, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> On Fri, 17 Jan 2020 at 23:52, Florian Fainelli <f.fainelli@gmail.com> wrote:
> >
> > From: Andrey Ryabinin <ryabinin@virtuozzo.com>
> >
> > This patch enables the kernel address sanitizer for ARM. XIP_KERNEL has
> > not been tested and is therefore not allowed.
> >
> > Acked-by: Dmitry Vyukov <dvyukov@google.com>
> > Tested-by: Linus Walleij <linus.walleij@linaro.org>
> > Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
> > Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
> > ---
> > Documentation/dev-tools/kasan.rst | 4 ++--
> > arch/arm/Kconfig | 9 +++++++++
> > arch/arm/boot/compressed/Makefile | 1 +
> > drivers/firmware/efi/libstub/Makefile | 3 ++-
> > 4 files changed, 14 insertions(+), 3 deletions(-)
> >
> > diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
> > index e4d66e7c50de..6acd949989c3 100644
> > --- a/Documentation/dev-tools/kasan.rst
> > +++ b/Documentation/dev-tools/kasan.rst
> > @@ -21,8 +21,8 @@ global variables yet.
> >
> > Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
> >
> > -Currently generic KASAN is supported for the x86_64, arm64, xtensa and s390
> > -architectures, and tag-based KASAN is supported only for arm64.
> > +Currently generic KASAN is supported for the x86_64, arm, arm64, xtensa and
> > +s390 architectures, and tag-based KASAN is supported only for arm64.
> >
> > Usage
> > -----
> > diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> > index 96dab76da3b3..70a7eb50984e 100644
> > --- a/arch/arm/Kconfig
> > +++ b/arch/arm/Kconfig
> > @@ -65,6 +65,7 @@ config ARM
> > select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
> > select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
> > select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
> > + select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
> > select HAVE_ARCH_MMAP_RND_BITS if MMU
> > select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
> > select HAVE_ARCH_THREAD_STRUCT_WHITELIST
> > @@ -212,6 +213,14 @@ config ARCH_MAY_HAVE_PC_FDC
> > config ZONE_DMA
> > bool
> >
> > +config KASAN_SHADOW_OFFSET
> > + hex
> > + depends on KASAN
> > + default 0x1f000000 if PAGE_OFFSET=0x40000000
> > + default 0x5f000000 if PAGE_OFFSET=0x80000000
> > + default 0x9f000000 if PAGE_OFFSET=0xC0000000
> > + default 0xffffffff
> > +
> > config ARCH_SUPPORTS_UPROBES
> > def_bool y
> >
> > diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
> > index 83991a0447fa..efda24b00a44 100644
> > --- a/arch/arm/boot/compressed/Makefile
> > +++ b/arch/arm/boot/compressed/Makefile
> > @@ -25,6 +25,7 @@ endif
> >
> > GCOV_PROFILE := n
> > KASAN_SANITIZE := n
> > +CFLAGS_KERNEL += -D__SANITIZE_ADDRESS__
> >
> > # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
> > KCOV_INSTRUMENT := n
> > diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> > index c35f893897e1..c8b36824189b 100644
> > --- a/drivers/firmware/efi/libstub/Makefile
> > +++ b/drivers/firmware/efi/libstub/Makefile
> > @@ -20,7 +20,8 @@ cflags-$(CONFIG_ARM64) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
> > -fpie $(DISABLE_STACKLEAK_PLUGIN)
> > cflags-$(CONFIG_ARM) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
> > -fno-builtin -fpic \
> > - $(call cc-option,-mno-single-pic-base)
> > + $(call cc-option,-mno-single-pic-base) \
> > + -D__SANITIZE_ADDRESS__
> >
>
> I am not too crazy about this need to unconditionally 'enable' KASAN
> on the command line like this, in order to be able to disable it again
> when CONFIG_KASAN=y.
>
> Could we instead add something like this at the top of
> arch/arm/boot/compressed/string.c?
>
> #ifdef CONFIG_KASAN
> #undef memcpy
> #undef memmove
> #undef memset
> void *__memcpy(void *__dest, __const void *__src, size_t __n) __alias(memcpy);
> void *__memmove(void *__dest, __const void *__src, size_t count)
> __alias(memmove);
> void *__memset(void *s, int c, size_t count) __alias(memset);
> #endif
>
> > index c35f893897e1..c8b36824189b 100644
> > --- a/drivers/firmware/efi/libstub/Makefile
> > +++ b/drivers/firmware/efi/libstub/Makefile
> > @@ -20,7 +20,8 @@ cflags-$(CONFIG_ARM64) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
> > -fpie $(DISABLE_STACKLEAK_PLUGIN)
> > cflags-$(CONFIG_ARM) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
> > -fno-builtin -fpic \
> > - $(call cc-option,-mno-single-pic-base)
> > + $(call cc-option,-mno-single-pic-base) \
> > + -D__SANITIZE_ADDRESS__
> >
>
> I am not too crazy about this need to unconditionally 'enable' KASAN
> on the command line like this, in order to be able to disable it again
> when CONFIG_KASAN=y.
>
> Could we instead add something like this at the top of
> arch/arm/boot/compressed/string.c?
>
> #ifdef CONFIG_KASAN
> #undef memcpy
> #undef memmove
> #undef memset
> void *__memcpy(void *__dest, __const void *__src, size_t __n) __alias(memcpy);
> void *__memmove(void *__dest, __const void *__src, size_t count)
> __alias(memmove);
> void *__memset(void *s, int c, size_t count) __alias(memset);
> #endif
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
^ permalink raw reply [flat|nested] 33+ messages in thread
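[Archive editor's note: the KASAN_SHADOW_OFFSET defaults in the Kconfig hunk quoted above follow from generic KASAN's shadow translation, shadow(addr) = (addr >> 3) + offset. A stand-alone sketch of that arithmetic (plain userspace C, not kernel code; `kasan_shadow` is an illustrative name, not a kernel symbol) shows how each default pairs with its PAGE_OFFSET:]

```c
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 3	/* one shadow byte tracks 8 bytes of memory */

/* Generic KASAN's shadow translation: shadow(addr) = (addr >> 3) + offset.
 * The patch picks each KASAN_SHADOW_OFFSET so that the shadow of the
 * entire 32-bit address space still fits below PAGE_OFFSET. */
uint32_t kasan_shadow(uint32_t addr, uint32_t shadow_offset)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + shadow_offset;
}
```

[For example, with PAGE_OFFSET=0xC0000000 and offset 0x9f000000, the shadow of the top address 0xffffffff is 0x1fffffff + 0x9f000000 = 0xbeffffff, i.e. the whole shadow region ends just below PAGE_OFFSET, in user-half virtual space the kernel can claim.]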
* Re: [PATCH v7 7/7] ARM: Enable KASan for ARM
2020-04-10 10:45 ` Ard Biesheuvel
(?)
@ 2020-04-12 0:33 ` Linus Walleij
-1 siblings, 0 replies; 33+ messages in thread
From: Linus Walleij @ 2020-04-12 0:33 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Florian Fainelli, linux-arm-kernel, Andrey Ryabinin, Abbott Liu,
bcm-kernel-feedback-list, Alexander Potapenko, Dmitry Vyukov,
Jonathan Corbet, Russell King, Christoffer Dall, Marc Zyngier,
Arnd Bergmann, Nicolas Pitre, Vladimir Murzin, Kees Cook,
Jinbum Park, Alexandre Belloni, Daniel Lezcano,
Philippe Ombredanne, Rob Landley, Greg Kroah-Hartman,
Andrew Morton, Mark Rutland, Catalin Marinas, Masahiro Yamada,
Thomas Gleixner, Thomas Garnier, David Howells,
Geert Uytterhoeven, Andre Przywara, Julien Thierry, Andrew Jones,
Philip Derrin, Michal Hocko, Kirill A. Shutemov, kasan-dev,
Linux Doc Mailing List, Linux Kernel Mailing List, kvmarm,
Andrey Ryabinin
On Fri, Apr 10, 2020 at 12:45 PM Ard Biesheuvel <ardb@kernel.org> wrote:
> > +CFLAGS_KERNEL += -D__SANITIZE_ADDRESS__
(...)
> > - $(call cc-option,-mno-single-pic-base)
> > + $(call cc-option,-mno-single-pic-base) \
> > + -D__SANITIZE_ADDRESS__
>
> I am not too crazy about this need to unconditionally 'enable' KASAN
> on the command line like this, in order to be able to disable it again
> when CONFIG_KASAN=y.
>
> Could we instead add something like this at the top of
> arch/arm/boot/compressed/string.c?
>
> #ifdef CONFIG_KASAN
> #undef memcpy
> #undef memmove
> #undef memset
> void *__memcpy(void *__dest, __const void *__src, size_t __n) __alias(memcpy);
> void *__memmove(void *__dest, __const void *__src, size_t count)
> __alias(memmove);
> void *__memset(void *s, int c, size_t count) __alias(memset);
> #endif
I obviously missed this before I sent out my new version of the series.
It bothers me too.
I will try this approach when I prepare the next iteration.
Thanks a lot!
Yours,
Linus Walleij
^ permalink raw reply [flat|nested] 33+ messages in thread
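[Archive editor's note: the `__alias()` approach Ard proposes and Linus agrees to try can be sketched in stand-alone C. This is illustrative only -- `my_memset` and `my_memset_alias` are made-up names standing in for the decompressor's memset and the double-underscore `__memset` symbol that KASAN-instrumented callers reference:]

```c
#include <stddef.h>

/* A plain memset-style implementation, as the decompressor's string.c
 * provides. */
void *my_memset(void *s, int c, size_t n)
{
	unsigned char *p = s;

	while (n--)
		*p++ = (unsigned char)c;
	return s;
}

/* Export the same code under a second symbol name using GCC's alias
 * attribute, which is what the kernel's __alias() macro wraps.  Code
 * built with KASAN has its mem* calls redirected to the __mem* names;
 * the alias satisfies those references without duplicating the body. */
void *my_memset_alias(void *s, int c, size_t n)
	__attribute__((alias("my_memset")));
```

[Both names resolve to the same function address, so callers instrumented for KASAN and uninstrumented callers share one implementation -- the effect Ard's string.c snippet achieves for memcpy, memmove, and memset.]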
end of thread, other threads:[~2020-04-12 0:33 UTC | newest]
Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-17 22:48 [PATCH v7 0/7] KASan for arm Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` [PATCH v7 1/7] ARM: Moved CP15 definitions from kvm_hyp.h to cp15.h Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` [PATCH v7 2/7] ARM: Add TTBR operator for kasan_init Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` [PATCH v7 3/7] ARM: Disable instrumentation for some code Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` [PATCH v7 4/7] ARM: Replace memory function for kasan Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` [PATCH v7 5/7] ARM: Define the virtual space of KASan's shadow region Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` [PATCH v7 6/7] ARM: Initialize the mapping of KASan shadow memory Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` [PATCH v7 7/7] ARM: Enable KASan for ARM Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-01-17 22:48 ` Florian Fainelli
2020-04-10 10:45 ` Ard Biesheuvel
2020-04-10 10:45 ` Ard Biesheuvel
2020-04-10 10:45 ` Ard Biesheuvel
2020-04-10 10:47 ` Ard Biesheuvel
2020-04-10 10:47 ` Ard Biesheuvel
2020-04-10 10:47 ` Ard Biesheuvel
2020-04-12 0:33 ` Linus Walleij
2020-04-12 0:33 ` Linus Walleij
2020-04-12 0:33 ` Linus Walleij