From mboxrd@z Thu Jan 1 00:00:00 1970
From: Daniel Axtens <dja@axtens.net>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linuxppc-dev@lists.ozlabs.org, kasan-dev@googlegroups.com,
	christophe.leroy@csgroup.eu, aneesh.kumar@linux.ibm.com,
	bsingharora@gmail.com
Cc: Daniel Axtens <dja@axtens.net>
Subject: [PATCH v10 6/6] powerpc: Book3S 64-bit outline-only KASAN support
Date: Wed, 3 Feb 2021 22:59:46 +1100
Message-Id: <20210203115946.663273-7-dja@axtens.net>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210203115946.663273-1-dja@axtens.net>
References: <20210203115946.663273-1-dja@axtens.net>

Implement a limited form of KASAN for Book3S 64-bit machines running under
the Radix MMU, supporting only outline mode.

 - Enable the compiler instrumentation to check addresses and maintain the
   shadow region. (This is the guts of KASAN which we can easily reuse.)

 - Require kasan-vmalloc support to handle modules and anything else in
   vmalloc space.

 - KASAN needs to be able to validate all pointer accesses, but we can't
   instrument all kernel addresses - only linear map and vmalloc. On boot,
   set up a single page of read-only shadow that marks all iomap and
   vmemmap accesses as valid.

 - Make our stack-walking code KASAN-safe by using READ_ONCE_NOCHECK -
   generic code, arm64, s390 and x86 all do this for similar sorts of
   reasons: when unwinding a stack, we might touch memory that KASAN has
   marked as being out-of-bounds. In our case we often get this when
   checking for an exception frame because we're checking an arbitrary
   offset into the stack frame (see the sketch after this list).

   See commit 20955746320e ("s390/kasan: avoid false positives during stack
   unwind"), commit bcaf669b4bdb ("arm64: disable kasan when accessing
   frame->fp in unwind_frame"), commit 91e08ab0c851 ("x86/dumpstack:
   Prevent KASAN false positive warnings") and commit 6e22c8366416
   ("tracing, kasan: Silence Kasan warning in check_stack of stack_tracer")

 - Document KASAN in both generic and powerpc docs.
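As a minimal sketch of that READ_ONCE_NOCHECK pattern (modelled on the
show_stack() changes below; walk_frames() and its output format are
illustrative, not kernel code):

	#include <linux/compiler.h>	/* READ_ONCE_NOCHECK() */
	#include <linux/printk.h>
	#include <asm/ptrace.h>		/* STACK_FRAME_LR_SAVE */

	/*
	 * Walk stack frames without tripping KASAN: the probed slots may
	 * legitimately lie outside any object KASAN is tracking, so read
	 * them through READ_ONCE_NOCHECK(), which skips instrumentation.
	 */
	static void walk_frames(unsigned long sp)
	{
		while (sp) {
			unsigned long *stack = (unsigned long *)sp;
			unsigned long newsp, ip;

			newsp = READ_ONCE_NOCHECK(stack[0]);	/* back chain */
			ip = READ_ONCE_NOCHECK(stack[STACK_FRAME_LR_SAVE]);

			pr_info("[%lx] %pS\n", sp, (void *)ip);
			sp = newsp;
		}
	}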
Background
----------

KASAN support on Book3S is a bit tricky to get right:

 - It would be good to support inline instrumentation so as to be able to
   catch stack issues that cannot be caught with outline mode.

 - Inline instrumentation requires a fixed offset.

 - Book3S runs code with translations off ("real mode") during boot,
   including a lot of generic device-tree parsing code which is used to
   determine MMU features.

    [ppc64 mm note: The kernel installs a linear mapping at effective
    address c000...-c008.... This is a one-to-one mapping with physical
    memory from 0000... onward. Because of how memory accesses work on
    powerpc 64-bit Book3S, a kernel pointer in the linear map accesses the
    same memory both with translations on (accessing as an 'effective
    address'), and with translations off (accessing as a 'real address').
    This works in both guests and the hypervisor. For more details, see
    s5.7 of Book III of version 3 of the ISA, in particular the Storage
    Control Overview, s5.7.3, and s5.7.5 - noting that this KASAN
    implementation currently only supports Radix.]

 - Some code - most notably a lot of KVM code - also runs with
   translations off after boot.

 - Therefore any offset has to point to memory that is valid with
   translations on or off.

One approach is just to give up on inline instrumentation. This way
boot-time checks can be delayed until after the MMU is set up, and we can
just not instrument any code that runs with translations off after
booting. Take this approach for now and require outline instrumentation.

Previous attempts allowed inline instrumentation. However, they came with
some unfortunate restrictions: only physically contiguous memory could be
used and it had to be specified at compile time. Maybe we can do better in
the future.
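For concreteness, outline KASAN translates an address to its shadow as
(addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET. The small
userspace sketch below (illustrative only) checks that arithmetic against
the constants this patch selects:

	#include <assert.h>
	#include <inttypes.h>
	#include <stdio.h>

	#define KASAN_SHADOW_SCALE_SHIFT 3
	#define KASAN_SHADOW_OFFSET      0xa80e000000000000ULL

	static uint64_t mem_to_shadow(uint64_t addr)
	{
		return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
	}

	int main(void)
	{
		/* The shadow of the linear map start lands at vmemmap end. */
		assert(mem_to_shadow(0xc000000000000000ULL) == 0xc00e000000000000ULL);
		/*
		 * Shadowed addresses end at c00e..., so the shadow itself
		 * ends at KASAN_SHADOW_END and needs no shadow of its own.
		 */
		assert(mem_to_shadow(0xc00e000000000000ULL) == 0xc00fc00000000000ULL);
		printf("shadow constants check out\n");
		return 0;
	}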
Cc: Balbir Singh <bsingharora@gmail.com> # ppc64 out-of-line radix version
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> # ppc64 hash version
Cc: Christophe Leroy <christophe.leroy@csgroup.eu> # ppc32 version
Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 Documentation/dev-tools/kasan.rst            |  9 +-
 Documentation/powerpc/kasan.txt              | 48 +++++++++-
 arch/powerpc/Kconfig                         |  4 +-
 arch/powerpc/Kconfig.debug                   |  3 +-
 arch/powerpc/include/asm/book3s/64/hash.h    |  4 +
 arch/powerpc/include/asm/book3s/64/pgtable.h |  4 +
 arch/powerpc/include/asm/book3s/64/radix.h   | 13 ++-
 arch/powerpc/include/asm/kasan.h             | 22 +++++
 arch/powerpc/kernel/Makefile                 | 11 +++
 arch/powerpc/kernel/process.c                | 16 ++--
 arch/powerpc/kvm/Makefile                    |  5 ++
 arch/powerpc/mm/book3s64/Makefile            |  9 ++
 arch/powerpc/mm/kasan/Makefile               |  1 +
 arch/powerpc/mm/kasan/init_book3s_64.c       | 95 ++++++++++++++++++++
 arch/powerpc/mm/ptdump/ptdump.c              | 20 ++++-
 arch/powerpc/platforms/Kconfig.cputype       |  1 +
 arch/powerpc/platforms/powernv/Makefile      |  6 ++
 arch/powerpc/platforms/pseries/Makefile      |  3 +
 18 files changed, 256 insertions(+), 18 deletions(-)
 create mode 100644 arch/powerpc/mm/kasan/init_book3s_64.c

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index 9cfc116cc6bf..8c5d0c5c35f6 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -22,8 +22,9 @@ out-of-bounds accesses for global variables is only supported since Clang 11.
 Tag-based KASAN is only supported in Clang.
 
 Currently generic KASAN is supported for the x86_64, arm, arm64, xtensa, s390
-and riscv architectures. It is also supported on 32-bit powerpc kernels.
-Tag-based KASAN modes are supported only for arm64.
+and riscv architectures. It is also supported on powerpc for 32-bit powerpc
+kernels and for 64-bit kernels running under the Radix MMU. Tag-based KASAN
+modes are supported only for arm64.
 
 Usage
 -----
@@ -334,8 +335,8 @@ CONFIG_KASAN_VMALLOC
 
 With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
 cost of greater memory usage. Currently this supported on x86, s390
-and 32-bit powerpc. It is optional, except on 32-bit powerpc kernels
-with module support, where it is required.
+and powerpc. It is optional, except on 64-bit powerpc kernels, and on
+32-bit powerpc kernels with module support, where it is required.
 
 This works by hooking into vmalloc and vmap, and dynamically allocating
 real shadow memory to back the mappings.
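A simplified sketch of that vmalloc hook, for readers unfamiliar with it.
This shows the shape of the idea only; shadow_page_alloc() and
shadow_map_page() are hypothetical stand-ins, not the kernel's actual
internals:

	/* When a vmalloc area is created, back its shadow with real pages. */
	static int back_vmalloc_shadow(unsigned long addr, unsigned long size)
	{
		unsigned long k_start, k_end, k_cur;

		k_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow((void *)addr),
				     PAGE_SIZE);
		k_end = ALIGN((unsigned long)kasan_mem_to_shadow((void *)(addr + size)),
			      PAGE_SIZE);

		for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
			void *page = shadow_page_alloc();	/* hypothetical */

			if (!page)
				return -ENOMEM;
			shadow_map_page(k_cur, __pa(page));	/* hypothetical */
		}
		return 0;
	}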
diff --git a/Documentation/powerpc/kasan.txt b/Documentation/powerpc/kasan.txt
index 26bb0e8bb18c..f032b4eaf205 100644
--- a/Documentation/powerpc/kasan.txt
+++ b/Documentation/powerpc/kasan.txt
@@ -1,4 +1,4 @@
-KASAN is supported on powerpc on 32-bit only.
+KASAN is supported on powerpc on 32-bit and Radix 64-bit only.
 
 32 bit support
 ==============
@@ -10,3 +10,49 @@ fixmap area and occupies one eighth of the total kernel virtual memory space.
 
 Instrumentation of the vmalloc area is optional, unless built with modules,
 in which case it is required.
+
+64 bit support
+==============
+
+Currently, only the radix MMU is supported. There have been versions for hash
+and Book3E processors floating around on the mailing list, but nothing has been
+merged.
+
+KASAN support on Book3S is a bit tricky to get right:
+
+ - It would be good to support inline instrumentation so as to be able to catch
+   stack issues that cannot be caught with outline mode.
+
+ - Inline instrumentation requires a fixed offset.
+
+ - Book3S runs code with translations off ("real mode") during boot, including a
+   lot of generic device-tree parsing code which is used to determine MMU
+   features.
+
+ - Some code - most notably a lot of KVM code - also runs with translations off
+   after boot.
+
+ - Therefore any offset has to point to memory that is valid with
+   translations on or off.
+
+One approach is just to give up on inline instrumentation. This way boot-time
+checks can be delayed until after the MMU is set up, and we can just not
+instrument any code that runs with translations off after booting. This is the
+current approach.
+
+To avoid this limitation, the KASAN shadow would have to be placed inside the
+linear mapping, using the same high-bits trick we use for the rest of the linear
+mapping. This is tricky:
+
+ - We'd like to place it near the start of physical memory. In theory we can do
+   this at run-time based on how much physical memory we have, but this requires
+   being able to arbitrarily relocate the kernel, which is basically the tricky
+   part of KASLR. Not being game to implement both tricky things at once, this
+   is hopefully something we can revisit once we get KASLR for Book3S.
+
+ - Alternatively, we can place the shadow at the _end_ of memory, but this
+   requires knowing how much contiguous physical memory a system has _at compile
+   time_. This is a big hammer, and has some unfortunate consequences: inability
+   to handle discontiguous physical memory, total failure to boot on machines
+   with less memory than specified, and that machines with more memory than
+   specified can't use it. This was deemed unacceptable.
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9141f03060ce..a1f359ec2fa1 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -118,6 +118,7 @@ config PPC
	# Please keep this list sorted alphabetically.
	#
	select ARCH_32BIT_OFF_T if PPC32
+	select ARCH_DISABLE_KASAN_INLINE	if PPC_RADIX_MMU
	select ARCH_HAS_DEBUG_VIRTUAL
	select ARCH_HAS_DEVMEM_IS_ALLOWED
	select ARCH_HAS_ELF_RANDOMIZE
@@ -183,7 +184,8 @@ config PPC
	select HAVE_ARCH_HUGE_VMAP		if PPC_BOOK3S_64 && PPC_RADIX_MMU
	select HAVE_ARCH_JUMP_LABEL
	select HAVE_ARCH_KASAN			if PPC32 && PPC_PAGE_SHIFT <= 14
-	select HAVE_ARCH_KASAN_VMALLOC		if PPC32 && PPC_PAGE_SHIFT <= 14
+	select HAVE_ARCH_KASAN			if PPC_RADIX_MMU
+	select HAVE_ARCH_KASAN_VMALLOC		if HAVE_ARCH_KASAN
	select HAVE_ARCH_KGDB
	select HAVE_ARCH_MMAP_RND_BITS
	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index b88900f4832f..33e4cc8c4306 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -397,4 +397,5 @@ config PPC_FAST_ENDIAN_SWITCH
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
-	default 0xe0000000
+	default 0xe0000000 if PPC32
+	default 0xa80e000000000000 if PPC64
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index d959b0195ad9..222669864ff6 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -18,6 +18,10 @@
 #include <asm/book3s/64/hash-4k.h>
 #endif
 
+#define H_PTRS_PER_PTE		(1 << H_PTE_INDEX_SIZE)
+#define H_PTRS_PER_PMD		(1 << H_PMD_INDEX_SIZE)
+#define H_PTRS_PER_PUD		(1 << H_PUD_INDEX_SIZE)
+
 /* Bits to set in a PMD/PUD/PGD entry valid bit*/
 #define HASH_PMD_VAL_BITS		(0x8000000000000000UL)
 #define HASH_PUD_VAL_BITS		(0x8000000000000000UL)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index a39886681629..891084efdad7 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -230,6 +230,10 @@ extern unsigned long __pmd_frag_size_shift;
 #define PTRS_PER_PUD	(1 << PUD_INDEX_SIZE)
 #define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)
 
+#define MAX_PTRS_PER_PTE ((H_PTRS_PER_PTE > R_PTRS_PER_PTE) ? H_PTRS_PER_PTE : R_PTRS_PER_PTE)
+#define MAX_PTRS_PER_PMD ((H_PTRS_PER_PMD > R_PTRS_PER_PMD) ? H_PTRS_PER_PMD : R_PTRS_PER_PMD)
+#define MAX_PTRS_PER_PUD ((H_PTRS_PER_PUD > R_PTRS_PER_PUD) ? H_PTRS_PER_PUD : R_PTRS_PER_PUD)
+
 /* PMD_SHIFT determines what a second-level page table entry can map */
 #define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
 #define PMD_SIZE	(1UL << PMD_SHIFT)
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index c7813dc628fc..b3492b80f858 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -35,6 +35,11 @@
 #define RADIX_PMD_SHIFT		(PAGE_SHIFT + RADIX_PTE_INDEX_SIZE)
 #define RADIX_PUD_SHIFT		(RADIX_PMD_SHIFT + RADIX_PMD_INDEX_SIZE)
 #define RADIX_PGD_SHIFT		(RADIX_PUD_SHIFT + RADIX_PUD_INDEX_SIZE)
+
+#define R_PTRS_PER_PTE		(1 << RADIX_PTE_INDEX_SIZE)
+#define R_PTRS_PER_PMD		(1 << RADIX_PMD_INDEX_SIZE)
+#define R_PTRS_PER_PUD		(1 << RADIX_PUD_INDEX_SIZE)
+
 /*
  * Size of EA range mapped by our pagetables.
  */
@@ -68,11 +73,11 @@
 *
 *
 * 3rd quadrant expanded:
- * +------------------------------+
+ * +------------------------------+  Highest address (0xc010000000000000)
+ * +------------------------------+  KASAN shadow end (0xc00fc00000000000)
 * |                              |
 * |                              |
- * |                              |
- * +------------------------------+  Kernel vmemmap end (0xc010000000000000)
+ * +------------------------------+  Kernel vmemmap end/shadow start (0xc00e000000000000)
 * |                              |
 * |            512TB             |
 * |                              |
@@ -126,6 +131,8 @@
 #define RADIX_VMEMMAP_SIZE	RADIX_KERN_MAP_SIZE
 #define RADIX_VMEMMAP_END	(RADIX_VMEMMAP_START + RADIX_VMEMMAP_SIZE)
 
+/* For the sizes of the shadow area, see kasan.h */
+
 #ifndef __ASSEMBLY__
 #define RADIX_PTE_TABLE_SIZE	(sizeof(pte_t) << RADIX_PTE_INDEX_SIZE)
 #define RADIX_PMD_TABLE_SIZE	(sizeof(pmd_t) << RADIX_PMD_INDEX_SIZE)
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index 7355ed05e65e..fdcce89c2515 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -30,7 +30,29 @@
 
 #define KASAN_SHADOW_OFFSET	ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET)
 
+#ifdef CONFIG_PPC32
 #define KASAN_SHADOW_END	(-(-KASAN_SHADOW_START >> KASAN_SHADOW_SCALE_SHIFT))
+#endif
+
+#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * The shadow ends before the highest accessible address
+ * because we don't need a shadow for the shadow. Instead:
+ * c00e000000000000 >> 3 + a80e000000000000 = c00fc00000000000
+ */
+#define KASAN_SHADOW_END 0xc00fc00000000000UL
+
+DECLARE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
+
+static __always_inline bool kasan_arch_is_ready(void)
+{
+	if (static_branch_likely(&powerpc_kasan_enabled_key))
+		return true;
+	return false;
+}
+
+#define kasan_arch_is_ready kasan_arch_is_ready
+#endif
 
 #ifdef CONFIG_KASAN
 void kasan_early_init(void);
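A sketch of how the static key above is consumed. It assumes the generic
kasan_arch_is_ready() hook introduced earlier in this series; check_range()
is an illustrative stand-in for a generic KASAN entry point, not actual
kernel code:

	static __always_inline bool check_range(const void *addr, size_t size)
	{
		/*
		 * Until kasan_init() flips powerpc_kasan_enabled_key, the
		 * shadow is not mapped, so every access must be treated as
		 * valid rather than dereferencing a missing shadow.
		 */
		if (!kasan_arch_is_ready())
			return true;

		/* ... normal shadow lookup and poison checks ... */
		return true;
	}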
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index fe2ef598e2ea..a0cbe8953238 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -32,6 +32,17 @@ KASAN_SANITIZE_early_32.o := n
 KASAN_SANITIZE_cputable.o := n
 KASAN_SANITIZE_prom_init.o := n
 KASAN_SANITIZE_btext.o := n
+KASAN_SANITIZE_paca.o := n
+KASAN_SANITIZE_setup_64.o := n
+KASAN_SANITIZE_mce.o := n
+KASAN_SANITIZE_mce_power.o := n
+
+# we have to be particularly careful in ppc64 to exclude code that
+# runs with translations off, as we cannot access the shadow with
+# translations off. However, ppc32 can sanitize this.
+ifdef CONFIG_PPC64
+KASAN_SANITIZE_traps.o := n
+endif
 
 ifdef CONFIG_KASAN
 CFLAGS_early_32.o += -DDISABLE_BRANCH_PROFILING
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index a66f435dabbf..9a6fd603f0e7 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2157,8 +2157,8 @@ void show_stack(struct task_struct *tsk, unsigned long *stack,
 			break;
 
 		stack = (unsigned long *) sp;
-		newsp = stack[0];
-		ip = stack[STACK_FRAME_LR_SAVE];
+		newsp = READ_ONCE_NOCHECK(stack[0]);
+		ip = READ_ONCE_NOCHECK(stack[STACK_FRAME_LR_SAVE]);
 		if (!firstframe || ip != lr) {
 			printk("%s["REG"] ["REG"] %pS",
 				loglvl, sp, ip, (void *)ip);
@@ -2176,17 +2176,19 @@ void show_stack(struct task_struct *tsk, unsigned long *stack,
 		 * See if this is an exception frame.
 		 * We look for the "regshere" marker in the current frame.
 		 */
-		if (validate_sp(sp, tsk, STACK_INT_FRAME_SIZE)
-		    && stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
+		if (validate_sp(sp, tsk, STACK_INT_FRAME_SIZE) &&
+		    (READ_ONCE_NOCHECK(stack[STACK_FRAME_MARKER]) ==
+		     STACK_FRAME_REGS_MARKER)) {
 			struct pt_regs *regs = (struct pt_regs *)
 				(sp + STACK_FRAME_OVERHEAD);
 
-			lr = regs->link;
+			lr = READ_ONCE_NOCHECK(regs->link);
 			printk("%s--- interrupt: %lx at %pS\n",
-				loglvl, regs->trap, (void *)regs->nip);
+				loglvl, READ_ONCE_NOCHECK(regs->trap),
+				(void *)READ_ONCE_NOCHECK(regs->nip));
 			__show_regs(regs);
 			printk("%s--- interrupt: %lx\n",
-				loglvl, regs->trap);
+				loglvl, READ_ONCE_NOCHECK(regs->trap));
 
 			firstframe = 1;
 		}
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 2bfeaa13befb..7f1592dacbeb 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -136,3 +136,8 @@ obj-$(CONFIG_KVM_BOOK3S_64_PR) += kvm-pr.o
 obj-$(CONFIG_KVM_BOOK3S_64_HV) += kvm-hv.o
 
 obj-y += $(kvm-book3s_64-builtin-objs-y)
+
+# KVM does a lot in real-mode, and 64-bit Book3S KASAN doesn't support that
+ifdef CONFIG_PPC_BOOK3S_64
+KASAN_SANITIZE := n
+endif
diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
index 1b56d3af47d4..a7d8a68bd2c5 100644
--- a/arch/powerpc/mm/book3s64/Makefile
+++ b/arch/powerpc/mm/book3s64/Makefile
@@ -21,3 +21,12 @@ obj-$(CONFIG_PPC_PKEY)	+= pkeys.o
 
 # Instrumenting the SLB fault path can lead to duplicate SLB entries
 KCOV_INSTRUMENT_slb.o := n
+
+# Parts of these can run in real mode and therefore are
+# not safe with the current outline KASAN implementation
+KASAN_SANITIZE_mmu_context.o := n
+KASAN_SANITIZE_pgtable.o := n
+KASAN_SANITIZE_radix_pgtable.o := n
+KASAN_SANITIZE_radix_tlb.o := n
+KASAN_SANITIZE_slb.o := n
+KASAN_SANITIZE_pkeys.o := n
diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
index 42fb628a44fd..07eef87abd6c 100644
--- a/arch/powerpc/mm/kasan/Makefile
+++ b/arch/powerpc/mm/kasan/Makefile
@@ -5,3 +5,4 @@ KASAN_SANITIZE := n
 obj-$(CONFIG_PPC32)		+= init_32.o
 obj-$(CONFIG_PPC_8xx)		+= 8xx.o
 obj-$(CONFIG_PPC_BOOK3S_32)	+= book3s_32.o
+obj-$(CONFIG_PPC_BOOK3S_64)	+= init_book3s_64.o
diff --git a/arch/powerpc/mm/kasan/init_book3s_64.c b/arch/powerpc/mm/kasan/init_book3s_64.c
new file mode 100644
index 000000000000..ca913ed951a2
--- /dev/null
+++ b/arch/powerpc/mm/kasan/init_book3s_64.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KASAN for 64-bit Book3S powerpc
+ *
+ * Copyright (C) 2019-2020 IBM Corporation
+ * Author: Daniel Axtens <dja@axtens.net>
+ */
+
+#define DISABLE_BRANCH_PROFILING
+
+#include <linux/kasan.h>
+#include <linux/printk.h>
+#include <linux/sched/task.h>
+#include <linux/memblock.h>
+#include <asm/pgalloc.h>
+
+DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
+
+static void __init kasan_init_phys_region(void *start, void *end)
+{
+	unsigned long k_start, k_end, k_cur;
+	void *va;
+
+	if (start >= end)
+		return;
+
+	k_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow(start), PAGE_SIZE);
+	k_end = ALIGN((unsigned long)kasan_mem_to_shadow(end), PAGE_SIZE);
+
+	va = memblock_alloc(k_end - k_start, PAGE_SIZE);
+	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE, va += PAGE_SIZE)
+		map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
+}
+
+void __init kasan_init(void)
+{
+	/*
+	 * We want to do the following things:
+	 *  1) Map real memory into the shadow for all physical memblocks
+	 *     This takes us from c000... to c008...
+	 *  2) Leave a hole over the shadow of vmalloc space. KASAN_VMALLOC
+	 *     will manage this for us.
+	 *     This takes us from c008... to c00a...
+	 *  3) Map the 'early shadow'/zero page over iomap and vmemmap space.
+	 *     This takes us up to where we start at c00e...
+	 */
+
+	void *k_start = kasan_mem_to_shadow((void *)RADIX_VMALLOC_END);
+	void *k_end = kasan_mem_to_shadow((void *)RADIX_VMEMMAP_END);
+	phys_addr_t start, end;
+	u64 i;
+	pte_t zero_pte = pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL);
+
+	if (!early_radix_enabled())
+		panic("KASAN requires radix!");
+
+	for_each_mem_range(i, &start, &end)
+		kasan_init_phys_region((void *)start, (void *)end);
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
+			     &kasan_early_shadow_pte[i], zero_pte, 0);
+
+	for (i = 0; i < PTRS_PER_PMD; i++)
+		pmd_populate_kernel(&init_mm, &kasan_early_shadow_pmd[i],
+				    kasan_early_shadow_pte);
+
+	for (i = 0; i < PTRS_PER_PUD; i++)
+		pud_populate(&init_mm, &kasan_early_shadow_pud[i],
+			     kasan_early_shadow_pmd);
+
+	/* map the early shadow over the iomap and vmemmap space */
+	kasan_populate_early_shadow(k_start, k_end);
+
+	/* mark early shadow region as RO and wipe it */
+	zero_pte = pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL_RO);
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
+			     &kasan_early_shadow_pte[i], zero_pte, 0);
+
+	/*
+	 * clear_page relies on some cache info that hasn't been set up yet.
+	 * It ends up looping ~forever and blows up other data.
+	 * Use memset instead.
+	 */
+	memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+
+	static_branch_inc(&powerpc_kasan_enabled_key);
+
+	/* Enable error messages */
+	init_task.kasan_depth = 0;
+	pr_info("KASAN init done (64-bit Book3S)\n");
+}
+
+void __init kasan_late_init(void) { }
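Applying the shadow translation (addr >> 3) + 0xa80e000000000000 to the
three steps in the kasan_init() comment above gives the layout below; the
userspace snippet is illustrative only, using constants from this patch:

	#include <inttypes.h>
	#include <stdio.h>

	static uint64_t shadow(uint64_t addr)
	{
		return (addr >> 3) + 0xa80e000000000000ULL;
	}

	int main(void)
	{
		/* 1) linear map c000...-c008...: backed by real shadow pages */
		printf("linear : %" PRIx64 "..%" PRIx64 "\n",
		       shadow(0xc000000000000000ULL), shadow(0xc008000000000000ULL));
		/* 2) vmalloc c008...-c00a...: hole, KASAN_VMALLOC fills it */
		printf("vmalloc: %" PRIx64 "..%" PRIx64 "\n",
		       shadow(0xc008000000000000ULL), shadow(0xc00a000000000000ULL));
		/* 3) iomap+vmemmap c00a...-c00e...: read-only early shadow page */
		printf("iomap+vmemmap: %" PRIx64 "..%" PRIx64 "\n",
		       shadow(0xc00a000000000000ULL), shadow(0xc00e000000000000ULL));
		return 0;
	}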
diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
index aca354fb670b..63672aa656e8 100644
--- a/arch/powerpc/mm/ptdump/ptdump.c
+++ b/arch/powerpc/mm/ptdump/ptdump.c
@@ -20,6 +20,7 @@
 #include <linux/seq_file.h>
 #include <asm/fixmap.h>
 #include <linux/const.h>
+#include <linux/kasan.h>
 #include <asm/page.h>
 #include <asm/hugetlb.h>
 
@@ -317,6 +318,23 @@ static void walk_pud(struct pg_state *st, p4d_t *p4d, unsigned long start)
 	unsigned long addr;
 	unsigned int i;
 
+#if defined(CONFIG_KASAN) && defined(CONFIG_PPC_BOOK3S_64)
+	/*
+	 * On radix + KASAN, we want to check for the KASAN "early" shadow
+	 * which covers huge quantities of memory with the same set of
+	 * read-only PTEs. If so, we want to note the first page (to see
+	 * the status change), and then note the last page. This gives us good
+	 * results without spending ages noting the exact same PTEs over 100s of
+	 * terabytes of memory.
+	 */
+	if (p4d_page(*p4d) == virt_to_page(lm_alias(kasan_early_shadow_pud))) {
+		walk_pmd(st, pud, start);
+		addr = start + (PTRS_PER_PUD - 1) * PUD_SIZE;
+		walk_pmd(st, pud, addr);
+		return;
+	}
+#endif
+
 	for (i = 0; i < PTRS_PER_PUD; i++, pud++) {
 		addr = start + i * PUD_SIZE;
 		if (!pud_none(*pud) && !pud_is_leaf(*pud))
@@ -387,11 +405,11 @@ static void populate_markers(void)
 #endif
 	address_markers[i++].start_address = FIXADDR_START;
 	address_markers[i++].start_address = FIXADDR_TOP;
+#endif /* CONFIG_PPC64 */
 #ifdef CONFIG_KASAN
 	address_markers[i++].start_address = KASAN_SHADOW_START;
 	address_markers[i++].start_address = KASAN_SHADOW_END;
 #endif
-#endif /* CONFIG_PPC64 */
 }
 
 static int ptdump_show(struct seq_file *m, void *v)
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 3ce907523b1e..9063c13e7221 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -101,6 +101,7 @@ config PPC_BOOK3S_64
	select ARCH_SUPPORTS_NUMA_BALANCING
	select IRQ_WORK
	select PPC_MM_SLICES
+	select KASAN_VMALLOC if KASAN
 
 config PPC_BOOK3E_64
	bool "Embedded processors"
diff --git a/arch/powerpc/platforms/powernv/Makefile b/arch/powerpc/platforms/powernv/Makefile
index 2eb6ae150d1f..f277e4793696 100644
--- a/arch/powerpc/platforms/powernv/Makefile
+++ b/arch/powerpc/platforms/powernv/Makefile
@@ -1,4 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0
+
+# nothing that deals with real mode is safe to KASAN
+# in particular, idle code runs a bunch of things in real mode
+KASAN_SANITIZE_idle.o := n
+KASAN_SANITIZE_pci-ioda.o := n
+
 obj-y			+= setup.o opal-call.o opal-wrappers.o opal.o opal-async.o
 obj-y			+= idle.o opal-rtc.o opal-nvram.o opal-lpc.o opal-flash.o
 obj-y			+= rng.o opal-elog.o opal-dump.o opal-sysparam.o opal-sensor.o
diff --git a/arch/powerpc/platforms/pseries/Makefile b/arch/powerpc/platforms/pseries/Makefile
index c8a2b0b05ac0..202199ef9e5c 100644
--- a/arch/powerpc/platforms/pseries/Makefile
+++ b/arch/powerpc/platforms/pseries/Makefile
@@ -30,3 +30,6 @@ obj-$(CONFIG_PPC_SVM)		+= svm.o
 obj-$(CONFIG_FA_DUMP)		+= rtas-fadump.o
 
 obj-$(CONFIG_SUSPEND)		+= suspend.o
+
+# nothing that operates in real mode is safe for KASAN
+KASAN_SANITIZE_ras.o := n
-- 
2.27.0