Date: Wed, 30 Oct 2019 15:22:21 +0100
In-Reply-To: <20191030142237.249532-1-glider@google.com>
Message-Id: <20191030142237.249532-10-glider@google.com>
References: <20191030142237.249532-1-glider@google.com>
Subject: [PATCH RFC v2 09/25] kmsan: add KMSAN runtime
From: glider@google.com
To: Vegard Nossum, Dmitry Vyukov, linux-mm@kvack.org
Cc: viro@zeniv.linux.org.uk, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@lst.de, dmitry.torokhov@gmail.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, mingo@elte.hu, axboe@kernel.dk, martin.petersen@oracle.com, schwidefsky@de.ibm.com, mst@redhat.com, monstr@monstr.eu, pmladek@suse.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, wsa@the-dreams.de, gor@linux.ibm.com, iii@linux.ibm.com, mark.rutland@arm.com, willy@infradead.org, rdunlap@infradead.org, andreyknvl@google.com, elver@google.com, Alexander Potapenko

This patch adds the KernelMemorySanitizer runtime and associated files:

 - arch/x86/include/asm/kmsan.h: assembly definitions for hooking interrupt handlers;
 - include/linux/kmsan-checks.h: user API to enable/disable KMSAN, poison/unpoison memory, etc.;
 - include/linux/kmsan.h: declarations of KMSAN memory hooks to be referenced outside the KMSAN runtime;
 - lib/Kconfig.kmsan: declarations for CONFIG_KMSAN and CONFIG_TEST_KMSAN;
 - mm/kmsan/Makefile: boilerplate Makefile;
 - mm/kmsan/kmsan.h: internal KMSAN declarations;
 - mm/kmsan/kmsan.c: core functions that operate on shadow and origin memory and perform checks, plus utility functions;
 - mm/kmsan/kmsan_entry.c: KMSAN hooks for entry_64.S;
 - mm/kmsan/kmsan_hooks.c: KMSAN hooks for kernel subsystems;
 - mm/kmsan/kmsan_init.c: KMSAN initialization routines;
 - mm/kmsan/kmsan_instr.c: functions called by KMSAN instrumentation;
 - scripts/Makefile.kmsan: CFLAGS_KMSAN.
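To illustrate the intended use of the kmsan-checks.h API added by this patch, here is a minimal, hypothetical sketch of driver hooks around a DMA transfer. The example_* functions are made up for illustration; only kmsan_unpoison_shadow() and kmsan_check_memory() come from this patch:

  /* Hypothetical consumer of the kmsan-checks.h API; not part of this patch. */
  #include <linux/kmsan-checks.h>

  static void example_dma_receive(void *buf, size_t len)
  {
  	/*
  	 * The device wrote |len| bytes into |buf| behind the compiler's
  	 * back, so KMSAN still considers them uninitialized. Mark them
  	 * as initialized to avoid false positives on later reads.
  	 */
  	kmsan_unpoison_shadow(buf, len);
  }

  static void example_dma_send(const void *buf, size_t len)
  {
  	/* Report any uses of uninitialized bytes before they leave the kernel. */
  	kmsan_check_memory(buf, len);
  }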
Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: linux-mm@kvack.org
---
v2:
 - dropped kmsan_handle_vprintk()
 - use locking for single kmsan_pr_err() calls
 - don't try to understand whether we're inside printk()

Change-Id: I4b3a7aba6d5804afac4f5f7274cadf8675b6e119
---
 arch/x86/Kconfig             |   1 +
 arch/x86/include/asm/kmsan.h | 129 ++++++++
 include/linux/kmsan-checks.h | 121 ++++++++
 include/linux/kmsan.h        | 143 +++++++++
 lib/Kconfig.debug            |   2 +
 lib/Kconfig.kmsan            |  22 ++
 mm/kmsan/Makefile            |   4 +
 mm/kmsan/kmsan.c             | 570 +++++++++++++++++++++++++++++++++++
 mm/kmsan/kmsan.h             | 149 +++++++++
 mm/kmsan/kmsan_entry.c       | 130 ++++++++
 mm/kmsan/kmsan_hooks.c       | 393 ++++++++++++++++++++++++
 mm/kmsan/kmsan_init.c        |  88 ++++++
 mm/kmsan/kmsan_instr.c       | 259 ++++++++++++++++
 mm/kmsan/kmsan_report.c      | 133 ++++++++
 mm/kmsan/kmsan_shadow.c      | 543 +++++++++++++++++++++++++++++++++
 mm/kmsan/kmsan_shadow.h      |  30 ++
 scripts/Makefile.kmsan       |  12 +
 17 files changed, 2729 insertions(+)
 create mode 100644 arch/x86/include/asm/kmsan.h
 create mode 100644 include/linux/kmsan-checks.h
 create mode 100644 include/linux/kmsan.h
 create mode 100644 lib/Kconfig.kmsan
 create mode 100644 mm/kmsan/Makefile
 create mode 100644 mm/kmsan/kmsan.c
 create mode 100644 mm/kmsan/kmsan.h
 create mode 100644 mm/kmsan/kmsan_entry.c
 create mode 100644 mm/kmsan/kmsan_hooks.c
 create mode 100644 mm/kmsan/kmsan_init.c
 create mode 100644 mm/kmsan/kmsan_instr.c
 create mode 100644 mm/kmsan/kmsan_report.c
 create mode 100644 mm/kmsan/kmsan_shadow.c
 create mode 100644 mm/kmsan/kmsan_shadow.h
 create mode 100644 scripts/Makefile.kmsan

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d6e1faa28c58..3f83a5c53808 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -135,6 +135,7 @@ config X86
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if X86_64
+	select HAVE_ARCH_KMSAN			if X86_64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
diff --git a/arch/x86/include/asm/kmsan.h b/arch/x86/include/asm/kmsan.h
new file mode 100644
index 000000000000..22322904102b
--- /dev/null
+++ b/arch/x86/include/asm/kmsan.h
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Assembly bits to safely invoke KMSAN hooks from .S files.
+ *
+ * Adopted from KTSAN assembly hooks implementation by Dmitry Vyukov:
+ * https://github.com/google/ktsan/blob/ktsan/arch/x86/include/asm/ktsan.h
+ *
+ * Copyright (C) 2017-2019 Google LLC
+ * Author: Alexander Potapenko
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ * + */ +#ifndef _ASM_X86_KMSAN_H +#define _ASM_X86_KMSAN_H + +#ifdef CONFIG_KMSAN + +#define KMSAN_PUSH_REGS \ + pushq %rax; \ + pushq %rcx; \ + pushq %rdx; \ + pushq %rdi; \ + pushq %rsi; \ + pushq %r8; \ + pushq %r9; \ + pushq %r10; \ + pushq %r11; \ +/**/ + +#define KMSAN_POP_REGS \ + popq %r11; \ + popq %r10; \ + popq %r9; \ + popq %r8; \ + popq %rsi; \ + popq %rdi; \ + popq %rdx; \ + popq %rcx; \ + popq %rax; \ +/**/ + +#define KMSAN_INTERRUPT_ENTER \ + KMSAN_PUSH_REGS \ + call kmsan_interrupt_enter; \ + KMSAN_POP_REGS \ +/**/ + +#define KMSAN_INTERRUPT_EXIT \ + KMSAN_PUSH_REGS \ + call kmsan_interrupt_exit; \ + KMSAN_POP_REGS \ +/**/ + +#define KMSAN_SOFTIRQ_ENTER \ + KMSAN_PUSH_REGS \ + call kmsan_softirq_enter; \ + KMSAN_POP_REGS \ +/**/ + +#define KMSAN_SOFTIRQ_EXIT \ + KMSAN_PUSH_REGS \ + call kmsan_softirq_exit; \ + KMSAN_POP_REGS \ +/**/ + +#define KMSAN_NMI_ENTER \ + KMSAN_PUSH_REGS \ + call kmsan_nmi_enter; \ + KMSAN_POP_REGS \ +/**/ + +#define KMSAN_NMI_EXIT \ + KMSAN_PUSH_REGS \ + call kmsan_nmi_exit; \ + KMSAN_POP_REGS \ +/**/ + +#define KMSAN_SYSCALL_ENTER \ + KMSAN_PUSH_REGS \ + call kmsan_syscall_enter; \ + KMSAN_POP_REGS \ +/**/ + +#define KMSAN_SYSCALL_EXIT \ + KMSAN_PUSH_REGS \ + call kmsan_syscall_exit; \ + KMSAN_POP_REGS \ +/**/ + +#define KMSAN_IST_ENTER(shift_ist) \ + KMSAN_PUSH_REGS \ + movq $shift_ist, %rdi; \ + call kmsan_ist_enter; \ + KMSAN_POP_REGS \ +/**/ + +#define KMSAN_IST_EXIT(shift_ist) \ + KMSAN_PUSH_REGS \ + movq $shift_ist, %rdi; \ + call kmsan_ist_exit; \ + KMSAN_POP_REGS \ +/**/ + +#define KMSAN_UNPOISON_PT_REGS \ + KMSAN_PUSH_REGS \ + call kmsan_unpoison_pt_regs; \ + KMSAN_POP_REGS \ +/**/ + + +#else /* ifdef CONFIG_KMSAN */ + +#define KMSAN_INTERRUPT_ENTER +#define KMSAN_INTERRUPT_EXIT +#define KMSAN_SOFTIRQ_ENTER +#define KMSAN_SOFTIRQ_EXIT +#define KMSAN_NMI_ENTER +#define KMSAN_NMI_EXIT +#define KMSAN_SYSCALL_ENTER +#define KMSAN_SYSCALL_EXIT +#define KMSAN_IST_ENTER(shift_ist) +#define KMSAN_IST_EXIT(shift_ist) +#define KMSAN_UNPOISON_PT_REGS + +#endif /* ifdef CONFIG_KMSAN */ +#endif /* ifndef _ASM_X86_KMSAN_H */ diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h new file mode 100644 index 000000000000..5c60540ba324 --- /dev/null +++ b/include/linux/kmsan-checks.h @@ -0,0 +1,121 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN checks. + * TODO(glider): unite with kmsan.h? + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#ifndef _LINUX_KMSAN_CHECKS_H +#define _LINUX_KMSAN_CHECKS_H + +#include +#include + +struct i2c_msg; +struct page; +struct sk_buff; +struct urb; + +#ifdef CONFIG_KMSAN + +/* + * Helper functions that mark the return value initialized. + * Note that Clang ignores the inline attribute in the cases when a no_san= itize + * function is called from an instrumented one. 
+ */ + +__no_sanitize_memory +static inline unsigned char KMSAN_INIT_1(unsigned char value) +{ + return value; +} + +__no_sanitize_memory +static inline unsigned short KMSAN_INIT_2(unsigned short value) +{ + return value; +} + +__no_sanitize_memory +static inline unsigned int KMSAN_INIT_4(unsigned int value) +{ + return value; +} + +__no_sanitize_memory +static inline unsigned long KMSAN_INIT_8(unsigned long value) +{ + return value; +} + +#define KMSAN_INIT_VALUE(val) \ + ({ \ + typeof(val) __ret; \ + switch (sizeof(val)) { \ + case 1: \ + *(unsigned char *)&__ret =3D KMSAN_INIT_1( \ + (unsigned char)val); \ + break; \ + case 2: \ + *(unsigned short *)&__ret =3D KMSAN_INIT_2( \ + (unsigned short)val); \ + break; \ + case 4: \ + *(unsigned int *)&__ret =3D KMSAN_INIT_4( \ + (unsigned int)val); \ + break; \ + case 8: \ + *(unsigned long *)&__ret =3D KMSAN_INIT_8( \ + (unsigned long)val); \ + break; \ + default: \ + BUILD_BUG_ON(1); \ + } \ + __ret; \ + }) /**/ + +void kmsan_ignore_page(struct page *page, int order); +void kmsan_poison_shadow(const void *address, size_t size, gfp_t flags); +void kmsan_unpoison_shadow(const void *address, size_t size); +void kmsan_check_memory(const void *address, size_t size); +void kmsan_check_skb(const struct sk_buff *skb); +void kmsan_handle_urb(const struct urb *urb, bool is_out); +void kmsan_handle_i2c_transfer(struct i2c_msg *msgs, int num); +void kmsan_copy_to_user(const void *to, const void *from, size_t to_copy, + size_t left); +void *__msan_memcpy(void *dst, const void *src, u64 n); +void kmsan_enter_runtime(unsigned long *flags); +void kmsan_leave_runtime(unsigned long *flags); + +#else + +#define KMSAN_INIT_VALUE(value) (value) + +static inline void kmsan_ignore_page(struct page *page, int order) {} +static inline void kmsan_poison_shadow(const void *address, size_t size, + gfp_t flags) {} +static inline void kmsan_unpoison_shadow(const void *address, size_t size)= {} +static inline void kmsan_check_memory(const void *address, size_t size) {} +static inline void kmsan_check_skb(const struct sk_buff *skb) {} +static inline void kmsan_handle_urb(const struct urb *urb, bool is_out) {} +static inline void kmsan_handle_i2c_transfer(struct i2c_msg *msgs, int num= ) {} +static inline void kmsan_copy_to_user( + const void *to, const void *from, size_t to_copy, size_t left) {} +static inline void *__msan_memcpy(void *dst, const void *src, size_t n) +{ + return NULL; +} + +static inline void kmsan_enter_runtime(unsigned long *flags) {} +static inline void kmsan_leave_runtime(unsigned long *flags) {} + +#endif + +#endif /* _LINUX_KMSAN_CHECKS_H */ diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h new file mode 100644 index 000000000000..f5638bac368e --- /dev/null +++ b/include/linux/kmsan.h @@ -0,0 +1,143 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN API for subsystems. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ +#ifndef LINUX_KMSAN_H +#define LINUX_KMSAN_H + +#include +#include +#include +#include + +struct page; +struct kmem_cache; +struct task_struct; +struct vm_struct; + + +extern bool kmsan_ready; + +#ifdef CONFIG_KMSAN +void __init kmsan_initialize_shadow(void); +void __init kmsan_initialize(void); + +/* These constants are defined in the MSan LLVM instrumentation pass. 
*/ +#define RETVAL_SIZE 800 +#define KMSAN_PARAM_SIZE 800 + +#define PARAM_ARRAY_SIZE (KMSAN_PARAM_SIZE / sizeof(depot_stack_handle_t)) + +struct kmsan_context_state { + char param_tls[KMSAN_PARAM_SIZE]; + char retval_tls[RETVAL_SIZE]; + char va_arg_tls[KMSAN_PARAM_SIZE]; + char va_arg_origin_tls[KMSAN_PARAM_SIZE]; + u64 va_arg_overflow_size_tls; + depot_stack_handle_t param_origin_tls[PARAM_ARRAY_SIZE]; + depot_stack_handle_t retval_origin_tls; + depot_stack_handle_t origin_tls; +}; + +struct kmsan_task_state { + bool allow_reporting; + struct kmsan_context_state cstate; +}; + +void kmsan_task_create(struct task_struct *task); +void kmsan_task_exit(struct task_struct *task); +void kmsan_alloc_shadow_for_region(void *start, size_t size); +int kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags); +void kmsan_gup_pgd_range(struct page **pages, int nr); +void kmsan_free_page(struct page *page, unsigned int order); +void kmsan_split_page(struct page *page, unsigned int order); +void kmsan_copy_page_meta(struct page *dst, struct page *src); + +void kmsan_poison_slab(struct page *page, gfp_t flags); +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags); +void kmsan_kfree_large(const void *ptr); +void kmsan_kmalloc(struct kmem_cache *s, const void *object, size_t size, + gfp_t flags); +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags); +void kmsan_slab_free(struct kmem_cache *s, void *object); + +void kmsan_slab_setup_object(struct kmem_cache *s, void *object); +void kmsan_post_alloc_hook(struct kmem_cache *s, gfp_t flags, + size_t size, void *object); + +/* vmap */ +void kmsan_vmap_page_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages); +void kmsan_vunmap_page_range(unsigned long addr, unsigned long end); + +/* ioremap */ +void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot); +void kmsan_iounmap_page_range(unsigned long start, unsigned long end); + +void kmsan_softirq_enter(void); +void kmsan_softirq_exit(void); + +void kmsan_clear_page(void *page_addr); + +#else + +static inline void __init kmsan_initialize_shadow(void) { } +static inline void __init kmsan_initialize(void) { } + +static inline void kmsan_task_create(struct task_struct *task) {} +static inline void kmsan_task_exit(struct task_struct *task) {} +static inline void kmsan_alloc_shadow_for_region(void *start, size_t size)= {} +static inline int kmsan_alloc_page(struct page *page, unsigned int order, + gfp_t flags) +{ + return 0; +} +static inline void kmsan_gup_pgd_range(struct page **pages, int nr) {} +static inline void kmsan_free_page(struct page *page, unsigned int order) = {} +static inline void kmsan_split_page(struct page *page, unsigned int order)= {} +static inline void kmsan_copy_page_meta(struct page *dst, struct page *src= ) {} + +static inline void kmsan_poison_slab(struct page *page, gfp_t flags) {} +static inline void kmsan_kmalloc_large(const void *ptr, size_t size, + gfp_t flags) {} +static inline void kmsan_kfree_large(const void *ptr) {} +static inline void kmsan_kmalloc(struct kmem_cache *s, const void *object, + size_t size, gfp_t flags) {} +static inline void kmsan_slab_alloc(struct kmem_cache *s, void *object, + gfp_t flags) {} +static inline void kmsan_slab_free(struct kmem_cache *s, void *object) {} + +static inline void kmsan_slab_setup_object(struct kmem_cache *s, + void *object) {} +static inline void kmsan_post_alloc_hook(struct kmem_cache *s, gfp_t flags= , 
+ size_t size, void *object) {} + +static inline void kmsan_vmap_page_range_noflush(unsigned long start, + unsigned long end, + pgprot_t prot, + struct page **pages) {} +static inline void kmsan_vunmap_page_range(unsigned long start, + unsigned long end) {} + +static inline void kmsan_ioremap_page_range(unsigned long start, + unsigned long end, + phys_addr_t phys_addr, + pgprot_t prot) {} +static inline void kmsan_iounmap_page_range(unsigned long start, + unsigned long end) {} +static inline void kmsan_softirq_enter(void) {} +static inline void kmsan_softirq_exit(void) {} + +static inline void kmsan_clear_page(void *page_addr) {} +#endif + +#endif /* LINUX_KMSAN_H */ diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 93d97f9b0157..75c36318943d 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -756,6 +756,8 @@ config DEBUG_STACKOVERFLOW =20 source "lib/Kconfig.kasan" =20 +source "lib/Kconfig.kmsan" + endmenu # "Memory Debugging" =20 config ARCH_HAS_KCOV diff --git a/lib/Kconfig.kmsan b/lib/Kconfig.kmsan new file mode 100644 index 000000000000..187dddfcf220 --- /dev/null +++ b/lib/Kconfig.kmsan @@ -0,0 +1,22 @@ +config HAVE_ARCH_KMSAN + bool + +if HAVE_ARCH_KMSAN + +config KMSAN + bool "KMSAN: detector of uninitialized memory use" + depends on SLUB && !KASAN + select STACKDEPOT + help + KMSAN is a dynamic detector of uses of uninitialized memory in the + kernel. It is based on compiler instrumentation provided by Clang + and thus requires Clang 10.0.0+ to build. + +config TEST_KMSAN + tristate "Module for testing KMSAN for bug detection" + depends on m && KMSAN + help + Test module that can trigger various uses of uninitialized memory + detectable by KMSAN. + +endif diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile new file mode 100644 index 000000000000..ccf6d2d00a7a --- /dev/null +++ b/mm/kmsan/Makefile @@ -0,0 +1,4 @@ +obj-y :=3D kmsan.o kmsan_instr.o kmsan_init.o kmsan_entry.o kmsan_hooks.o = kmsan_report.o kmsan_shadow.o + +KMSAN_SANITIZE :=3D n +KCOV_INSTRUMENT :=3D n diff --git a/mm/kmsan/kmsan.c b/mm/kmsan/kmsan.c new file mode 100644 index 000000000000..fecb82dc5f4c --- /dev/null +++ b/mm/kmsan/kmsan.c @@ -0,0 +1,570 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN runtime library. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "../slab.h" +#include "kmsan.h" + +/* + * Some kernel asm() calls mention the non-existing |__force_order| variab= le + * in the asm constraints to preserve the order of accesses to control + * registers. KMSAN turns those mentions into actual memory accesses, ther= efore + * the variable is now required to link the kernel. + */ +unsigned long __force_order; + +bool kmsan_ready; +#define KMSAN_STACK_DEPTH 64 +#define MAX_CHAIN_DEPTH 7 + +/* + * According to Documentation/x86/kernel-stacks, kernel code can run on th= e + * following stacks: + * - regular task stack - when executing the task code + * - interrupt stack - when handling external hardware interrupts and sof= tirqs + * - NMI stack + * 0 is for regular interrupts, 1 for softirqs, 2 for NMI. 
+ * Because interrupts may nest, trying to use a new context for every new + * interrupt. + */ +/* [0] for dummy per-CPU context. */ +DEFINE_PER_CPU(struct kmsan_context_state[KMSAN_NESTED_CONTEXT_MAX], + kmsan_percpu_cstate); +/* 0 for task context, |i>0| for kmsan_context_state[i]. */ +DEFINE_PER_CPU(int, kmsan_context_level); +DEFINE_PER_CPU(int, kmsan_in_interrupt); +DEFINE_PER_CPU(bool, kmsan_in_softirq); +DEFINE_PER_CPU(bool, kmsan_in_nmi); +DEFINE_PER_CPU(int, kmsan_in_runtime); +/* TODO(glider): debug-only. */ +DEFINE_PER_CPU(unsigned long, kmsan_runtime_last_caller); + +struct kmsan_context_state *task_kmsan_context_state(void) +{ + int cpu =3D smp_processor_id(); + int level =3D this_cpu_read(kmsan_context_level); + struct kmsan_context_state *ret; + + if (!kmsan_ready || IN_RUNTIME()) { + ret =3D &per_cpu(kmsan_percpu_cstate[0], cpu); + __memset(ret, 0, sizeof(struct kmsan_context_state)); + return ret; + } + + if (!level) + ret =3D ¤t->kmsan.cstate; + else + ret =3D &per_cpu(kmsan_percpu_cstate[level], cpu); + return ret; +} + +void kmsan_internal_task_create(struct task_struct *task) +{ + struct kmsan_task_state *state =3D &task->kmsan; + + __memset(state, 0, sizeof(struct kmsan_task_state)); + state->allow_reporting =3D true; +} + +void kmsan_internal_memset_shadow(void *addr, int b, size_t size, + bool checked) +{ + void *shadow_start; + u64 page_offset, address =3D (u64)addr; + size_t to_fill; + + BUG_ON(!metadata_is_contiguous(addr, size, META_SHADOW)); + while (size) { + page_offset =3D address % PAGE_SIZE; + to_fill =3D min(PAGE_SIZE - page_offset, (u64)size); + shadow_start =3D kmsan_get_metadata((void *)address, to_fill, + META_SHADOW); + if (!shadow_start) { + if (checked) { + kmsan_pr_locked("WARNING: not memsetting %d bytes starting at %px, bec= ause the shadow is NULL\n", to_fill, address); + BUG(); + } + /* Otherwise just move on. */ + } else { + __memset(shadow_start, b, to_fill); + } + address +=3D to_fill; + size -=3D to_fill; + } +} + +void kmsan_internal_poison_shadow(void *address, size_t size, + gfp_t flags, unsigned int poison_flags) +{ + bool checked =3D poison_flags & KMSAN_POISON_CHECK; + depot_stack_handle_t handle; + u32 extra_bits =3D 0; + + if (poison_flags & KMSAN_POISON_FREE) + extra_bits =3D 1; + kmsan_internal_memset_shadow(address, -1, size, checked); + handle =3D kmsan_save_stack_with_flags(flags, extra_bits); + kmsan_set_origin_checked(address, size, handle, checked); +} + +void kmsan_internal_unpoison_shadow(void *address, size_t size, bool check= ed) +{ + kmsan_internal_memset_shadow(address, 0, size, checked); + kmsan_set_origin_checked(address, size, 0, checked); +} + +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int reserved) +{ + depot_stack_handle_t handle; + unsigned long entries[KMSAN_STACK_DEPTH]; + unsigned int nr_entries; + + nr_entries =3D stack_trace_save(entries, KMSAN_STACK_DEPTH, 0); + filter_irq_stacks(entries, nr_entries); + + /* Don't sleep (see might_sleep_if() in __alloc_pages_nodemask()). */ + flags &=3D ~__GFP_DIRECT_RECLAIM; + + handle =3D stack_depot_save(entries, nr_entries, flags); + return set_dsh_extra_bits(handle, reserved); +} + +/* + * Depending on the value of is_memmove, this serves as both a memcpy and = a + * memmove implementation. + * + * As with the regular memmove, do the following: + * - if src and dst don't overlap, use memcpy(); + * - if src and dst overlap: + * - if src > dst, use memcpy(); + * - if src < dst, use reverse-memcpy. 
+ * Why this is correct: + * - problems may arise if for some part of the overlapping region we + * overwrite its shadow with a new value before copying it somewhere. + * But there's a 1:1 mapping between the kernel memory and its shadow, + * therefore if this doesn't happen with the kernel memory it can't happ= en + * with the shadow. + */ +void kmsan_memcpy_memmove_metadata(void *dst, void *src, size_t n, + bool is_memmove) +{ + void *shadow_src, *shadow_dst; + depot_stack_handle_t *origin_src, *origin_dst; + int src_slots, dst_slots, i, iter, step, skip_bits; + depot_stack_handle_t old_origin =3D 0, chain_origin, new_origin =3D 0; + u32 *align_shadow_src, shadow; + bool backwards; + + BUG_ON(!metadata_is_contiguous(dst, n, META_SHADOW)); + BUG_ON(!metadata_is_contiguous(src, n, META_SHADOW)); + + shadow_dst =3D kmsan_get_metadata(dst, n, META_SHADOW); + if (!shadow_dst) + return; + + shadow_src =3D kmsan_get_metadata(src, n, META_SHADOW); + if (!shadow_src) { + /* + * |src| is untracked: zero out destination shadow, ignore the + * origins, we're done. + */ + __memset(shadow_dst, 0, n); + return; + } + if (is_memmove) + __memmove(shadow_dst, shadow_src, n); + else + __memcpy(shadow_dst, shadow_src, n); + + origin_dst =3D kmsan_get_metadata(dst, n, META_ORIGIN); + origin_src =3D kmsan_get_metadata(src, n, META_ORIGIN); + BUG_ON(!origin_dst || !origin_src); + BUG_ON(!metadata_is_contiguous(dst, n, META_ORIGIN)); + BUG_ON(!metadata_is_contiguous(src, n, META_ORIGIN)); + src_slots =3D (ALIGN((u64)src + n, ORIGIN_SIZE) - + ALIGN_DOWN((u64)src, ORIGIN_SIZE)) / ORIGIN_SIZE; + dst_slots =3D (ALIGN((u64)dst + n, ORIGIN_SIZE) - + ALIGN_DOWN((u64)dst, ORIGIN_SIZE)) / ORIGIN_SIZE; + BUG_ON(!src_slots || !dst_slots); + BUG_ON((src_slots < 1) || (dst_slots < 1)); + BUG_ON((src_slots - dst_slots > 1) || (dst_slots - src_slots < -1)); + + backwards =3D is_memmove && (dst > src); + i =3D backwards ? min(src_slots, dst_slots) - 1 : 0; + iter =3D backwards ? -1 : 1; + + align_shadow_src =3D (u32 *)ALIGN_DOWN((u64)shadow_src, ORIGIN_SIZE); + for (step =3D 0; step < min(src_slots, dst_slots); step++, i +=3D iter) { + BUG_ON(i < 0); + shadow =3D align_shadow_src[i]; + if (i =3D=3D 0) { + /* + * If |src| isn't aligned on ORIGIN_SIZE, don't + * look at the first |src % ORIGIN_SIZE| bytes + * of the first shadow slot. + */ + skip_bits =3D ((u64)src % ORIGIN_SIZE) * 8; + shadow =3D (shadow << skip_bits) >> skip_bits; + } + if (i =3D=3D src_slots - 1) { + /* + * If |src + n| isn't aligned on + * ORIGIN_SIZE, don't look at the last + * |(src + n) % ORIGIN_SIZE| bytes of the + * last shadow slot. + */ + skip_bits =3D (((u64)src + n) % ORIGIN_SIZE) * 8; + shadow =3D (shadow >> skip_bits) << skip_bits; + } + /* + * Overwrite the origin only if the corresponding + * shadow is nonempty. + */ + if (origin_src[i] && (origin_src[i] !=3D old_origin) && shadow) { + old_origin =3D origin_src[i]; + chain_origin =3D kmsan_internal_chain_origin(old_origin); + /* + * kmsan_internal_chain_origin() may return + * NULL, but we don't want to lose the previous + * origin value. 
+ */ + if (chain_origin) + new_origin =3D chain_origin; + else + new_origin =3D old_origin; + } + if (shadow) + origin_dst[i] =3D new_origin; + else + origin_dst[i] =3D 0; + } +} + +void kmsan_memcpy_metadata(void *dst, void *src, size_t n) +{ + kmsan_memcpy_memmove_metadata(dst, src, n, /*is_memmove*/false); +} + +void kmsan_memmove_metadata(void *dst, void *src, size_t n) +{ + kmsan_memcpy_memmove_metadata(dst, src, n, /*is_memmove*/true); +} + +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id) +{ + depot_stack_handle_t handle; + unsigned long entries[3]; + u64 magic =3D KMSAN_CHAIN_MAGIC_ORIGIN_FULL; + int depth =3D 0; + static int skipped; + u32 extra_bits; + + if (!kmsan_ready) + return 0; + + if (!id) + return id; + /* + * Make sure we have enough spare bits in |id| to hold the UAF bit and + * the chain depth. + */ + BUILD_BUG_ON((1 << STACK_DEPOT_EXTRA_BITS) <=3D (MAX_CHAIN_DEPTH << 1)); + + extra_bits =3D get_dsh_extra_bits(id); + + depth =3D extra_bits >> 1; + if (depth >=3D MAX_CHAIN_DEPTH) { + skipped++; + if (skipped % 10000 =3D=3D 0) { + kmsan_pr_locked("not chained %d origins\n", skipped); + dump_stack(); + kmsan_print_origin(id); + } + return id; + } + depth++; + /* Lowest bit is the UAF flag, higher bits hold the depth. */ + extra_bits =3D (depth << 1) | (extra_bits & 1); + /* TODO(glider): how do we figure out we've dropped some frames? */ + entries[0] =3D magic + depth; + entries[1] =3D kmsan_save_stack_with_flags(GFP_ATOMIC, extra_bits); + entries[2] =3D id; + handle =3D stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC); + return set_dsh_extra_bits(handle, extra_bits); +} + +void kmsan_write_aligned_origin(void *var, size_t size, u32 origin) +{ + u32 *var_cast =3D (u32 *)var; + int i; + + BUG_ON((u64)var_cast % ORIGIN_SIZE); + BUG_ON(size % ORIGIN_SIZE); + for (i =3D 0; i < size / ORIGIN_SIZE; i++) + var_cast[i] =3D origin; +} + +/* + * TODO(glider): writing an initialized byte shouldn't zero out the origin= , if + * the remaining three bytes are uninitialized. + */ +void kmsan_internal_set_origin(void *addr, int size, u32 origin) +{ + void *origin_start; + u64 address =3D (u64)addr, page_offset; + size_t to_fill, pad =3D 0; + + if (!IS_ALIGNED(address, ORIGIN_SIZE)) { + pad =3D address % ORIGIN_SIZE; + address -=3D pad; + size +=3D pad; + } + + while (size > 0) { + page_offset =3D address % PAGE_SIZE; + to_fill =3D min(PAGE_SIZE - page_offset, (u64)size); + /* write at least ORIGIN_SIZE bytes */ + to_fill =3D ALIGN(to_fill, ORIGIN_SIZE); + BUG_ON(!to_fill); + origin_start =3D kmsan_get_metadata((void *)address, to_fill, + META_ORIGIN); + address +=3D to_fill; + size -=3D to_fill; + if (!origin_start) + /* Can happen e.g. if the memory is untracked. 
*/ + continue; + kmsan_write_aligned_origin(origin_start, to_fill, origin); + } +} + +void kmsan_set_origin_checked(void *addr, int size, u32 origin, bool check= ed) +{ + if (checked && !metadata_is_contiguous(addr, size, META_ORIGIN)) { + kmsan_pr_locked("WARNING: not setting origin for %d bytes starting at %p= x, because the metadata is incontiguous\n", size, addr); + BUG(); + } + kmsan_internal_set_origin(addr, size, origin); +} + +struct page *vmalloc_to_page_or_null(void *vaddr) +{ + struct page *page; + + if (!kmsan_internal_is_vmalloc_addr(vaddr) && + !kmsan_internal_is_module_addr(vaddr)) + return NULL; + page =3D vmalloc_to_page(vaddr); + if (pfn_valid(page_to_pfn(page))) + return page; + else + return NULL; +} + +void kmsan_internal_check_memory(void *addr, size_t size, const void *user= _addr, + int reason) +{ + unsigned long irq_flags; + unsigned long addr64 =3D (unsigned long)addr; + unsigned char *shadow =3D NULL; + depot_stack_handle_t *origin =3D NULL; + depot_stack_handle_t cur_origin =3D 0, new_origin =3D 0; + int cur_off_start =3D -1; + int i, chunk_size; + size_t pos =3D 0; + + BUG_ON(!metadata_is_contiguous(addr, size, META_SHADOW)); + if (size <=3D 0) + return; + while (pos < size) { + chunk_size =3D min(size - pos, + PAGE_SIZE - ((addr64 + pos) % PAGE_SIZE)); + shadow =3D kmsan_get_metadata((void *)(addr64 + pos), chunk_size, + META_SHADOW); + if (!shadow) { + /* + * This page is untracked. If there were uninitialized + * bytes before, report them. + */ + if (cur_origin) { + ENTER_RUNTIME(irq_flags); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos - 1, user_addr, + reason); + LEAVE_RUNTIME(irq_flags); + } + cur_origin =3D 0; + cur_off_start =3D -1; + pos +=3D chunk_size; + continue; + } + for (i =3D 0; i < chunk_size; i++) { + if (!shadow[i]) { + /* + * This byte is unpoisoned. If there were + * poisoned bytes before, report them. + */ + if (cur_origin) { + ENTER_RUNTIME(irq_flags); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + LEAVE_RUNTIME(irq_flags); + } + cur_origin =3D 0; + cur_off_start =3D -1; + continue; + } + origin =3D kmsan_get_metadata((void *)(addr64 + pos + i), + chunk_size - i, META_ORIGIN); + BUG_ON(!origin); + new_origin =3D *origin; + /* + * Encountered new origin - report the previous + * uninitialized range. + */ + if (cur_origin !=3D new_origin) { + if (cur_origin) { + ENTER_RUNTIME(irq_flags); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + LEAVE_RUNTIME(irq_flags); + } + cur_origin =3D new_origin; + cur_off_start =3D pos + i; + } + } + pos +=3D chunk_size; + } + BUG_ON(pos !=3D size); + if (cur_origin) { + ENTER_RUNTIME(irq_flags); + kmsan_report(cur_origin, addr, size, cur_off_start, pos - 1, + user_addr, reason); + LEAVE_RUNTIME(irq_flags); + } +} + +/* + * TODO(glider): this check shouldn't be performed for origin pages, becau= se + * they're always accessed after the shadow pages. + */ +bool metadata_is_contiguous(void *addr, size_t size, bool is_origin) +{ + u64 cur_addr =3D (u64)addr, next_addr; + char *cur_meta =3D NULL, *next_meta =3D NULL; + depot_stack_handle_t *origin_p; + bool all_untracked =3D false; + const char *fname =3D is_origin ? "origin" : "shadow"; + + if (!size) + return true; + + /* The whole range belongs to the same page. 
*/ + if (ALIGN_DOWN(cur_addr + size - 1, PAGE_SIZE) =3D=3D + ALIGN_DOWN(cur_addr, PAGE_SIZE)) + return true; + cur_meta =3D kmsan_get_metadata((void *)cur_addr, 1, is_origin); + if (!cur_meta) + all_untracked =3D true; + for (next_addr =3D cur_addr + PAGE_SIZE; next_addr < (u64)addr + size; + cur_addr =3D next_addr, + cur_meta =3D next_meta, + next_addr +=3D PAGE_SIZE) { + next_meta =3D kmsan_get_metadata((void *)next_addr, 1, is_origin); + if (!next_meta) { + if (!all_untracked) + goto report; + continue; + } + if ((u64)cur_meta =3D=3D ((u64)next_meta - PAGE_SIZE)) + continue; + goto report; + } + return true; + +report: + kmsan_pr_locked("BUG: attempting to access two shadow page ranges.\n"); + dump_stack(); + kmsan_pr_locked("\n"); + kmsan_pr_locked("Access of size %d at %px.\n", size, addr); + kmsan_pr_locked("Addresses belonging to different ranges: %px and %px\n", + cur_addr, next_addr); + kmsan_pr_locked("page[0].%s: %px, page[1].%s: %px\n", + fname, cur_meta, fname, next_meta); + origin_p =3D kmsan_get_metadata(addr, 1, META_ORIGIN); + if (origin_p) { + kmsan_pr_locked("Origin: %08x\n", *origin_p); + kmsan_print_origin(*origin_p); + } else { + kmsan_pr_locked("Origin: unavailable\n"); + } + return false; +} + +/* + * Dummy replacement for __builtin_return_address() which may crash withou= t + * frame pointers. + */ +void *kmsan_internal_return_address(int arg) +{ +#ifdef CONFIG_UNWINDER_FRAME_POINTER + switch (arg) { + case 1: + return __builtin_return_address(1); + case 2: + return __builtin_return_address(2); + default: + BUG(); + } +#else + unsigned long entries[1]; + struct stack_trace trace =3D { + .nr_entries =3D 0, + .entries =3D entries, + .max_entries =3D 1, + .skip =3D arg + }; + save_stack_trace(&trace); + return entries[0]; +#endif +} + +bool kmsan_internal_is_module_addr(void *vaddr) +{ + return ((u64)vaddr >=3D MODULES_VADDR) && ((u64)vaddr < MODULES_END); +} + +bool kmsan_internal_is_vmalloc_addr(void *addr) +{ + return ((u64)addr >=3D VMALLOC_START) && ((u64)addr < VMALLOC_END); +} diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h new file mode 100644 index 000000000000..4cb3723e2d76 --- /dev/null +++ b/mm/kmsan/kmsan.h @@ -0,0 +1,149 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN internal declarations. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#ifndef __MM_KMSAN_KMSAN_H +#define __MM_KMSAN_KMSAN_H + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kmsan_shadow.h" + +#define KMSAN_MAGIC_MASK 0xffffffffff00 +#define KMSAN_ALLOCA_MAGIC_ORIGIN 0x4110c4071900 +#define KMSAN_CHAIN_MAGIC_ORIGIN_FULL 0xd419170cba00 + +#define KMSAN_POISON_NOCHECK 0x0 +#define KMSAN_POISON_CHECK 0x1 +#define KMSAN_POISON_FREE 0x2 + +#define ORIGIN_SIZE 4 + +#define META_SHADOW (false) +#define META_ORIGIN (true) + +#define KMSAN_NESTED_CONTEXT_MAX (8) +/* [0] for dummy per-CPU context */ +DECLARE_PER_CPU(struct kmsan_context_state[KMSAN_NESTED_CONTEXT_MAX], + kmsan_percpu_cstate); +/* 0 for task context, |i>0| for kmsan_context_state[i]. 
*/ +DECLARE_PER_CPU(int, kmsan_context_level); +DECLARE_PER_CPU(int, kmsan_in_interrupt); +DECLARE_PER_CPU(bool, kmsan_in_softirq); +DECLARE_PER_CPU(bool, kmsan_in_nmi); + +extern spinlock_t report_lock; + +/* Stolen from kernel/printk/internal.h */ +#define PRINTK_SAFE_CONTEXT_MASK 0x3fffffff + +/* Called by kmsan_report.c under a lock. */ +#define kmsan_pr_err(...) pr_err(__VA_ARGS__) + +/* Used in other places - doesn't require a lock. */ +#define kmsan_pr_locked(...) \ + do { \ + unsigned long flags; \ + spin_lock_irqsave(&report_lock, flags); \ + pr_err(__VA_ARGS__); \ + spin_unlock_irqrestore(&report_lock, flags); \ + } while (0) + +void kmsan_print_origin(depot_stack_handle_t origin); +void kmsan_report(depot_stack_handle_t origin, + void *address, int size, int off_first, int off_last, + const void *user_addr, int reason); + + +enum KMSAN_BUG_REASON { + REASON_ANY =3D 0, + REASON_COPY_TO_USER =3D 1, + REASON_USE_AFTER_FREE =3D 2, + REASON_SUBMIT_URB =3D 3, +}; + +/* + * When a compiler hook is invoked, it may make a call to instrumented cod= e + * and eventually call itself recursively. To avoid that, we protect the + * runtime entry points with ENTER_RUNTIME()/LEAVE_RUNTIME() macros and ex= it + * the hook if IN_RUNTIME() is true. But when an interrupt occurs inside t= he + * runtime, the hooks won=E2=80=99t run either, which may lead to errors. + * Therefore we have to disable interrupts inside the runtime. + */ +DECLARE_PER_CPU(int, kmsan_in_runtime); +DECLARE_PER_CPU(unsigned long, kmsan_runtime_last_caller); +#define IN_RUNTIME() (this_cpu_read(kmsan_in_runtime)) +#define ENTER_RUNTIME(irq_flags) \ + do { \ + preempt_disable(); \ + local_irq_save(irq_flags); \ + stop_nmi(); \ + this_cpu_inc(kmsan_in_runtime); \ + this_cpu_write(kmsan_runtime_last_caller, _THIS_IP_); \ + BUG_ON(this_cpu_read(kmsan_in_runtime) > 1); \ + } while (0) +#define LEAVE_RUNTIME(irq_flags) \ + do { \ + this_cpu_dec(kmsan_in_runtime); \ + if (this_cpu_read(kmsan_in_runtime)) { \ + kmsan_pr_err("kmsan_in_runtime: %d, last_caller: %pS\n", \ + this_cpu_read(kmsan_in_runtime), \ + this_cpu_read(kmsan_runtime_last_caller)); \ + BUG(); \ + } \ + restart_nmi(); \ + local_irq_restore(irq_flags); \ + preempt_enable(); } while (0) + +void kmsan_memcpy_metadata(void *dst, void *src, size_t n); +void kmsan_memmove_metadata(void *dst, void *src, size_t n); + +depot_stack_handle_t kmsan_save_stack(void); +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int extra_bits); +void kmsan_internal_poison_shadow(void *address, size_t size, gfp_t flags, + unsigned int poison_flags); +void kmsan_internal_unpoison_shadow(void *address, size_t size, bool check= ed); +void kmsan_internal_memset_shadow(void *address, int b, size_t size, + bool checked); +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id); +void kmsan_write_aligned_origin(void *var, size_t size, u32 origin); + +void kmsan_internal_task_create(struct task_struct *task); +void kmsan_internal_set_origin(void *addr, int size, u32 origin); +void kmsan_set_origin_checked(void *addr, int size, u32 origin, bool check= ed); + +struct kmsan_context_state *task_kmsan_context_state(void); + +bool metadata_is_contiguous(void *addr, size_t size, bool is_origin); +void kmsan_internal_check_memory(void *addr, size_t size, const void *user= _addr, + int reason); + +struct page *vmalloc_to_page_or_null(void *vaddr); + +/* Declared in mm/vmalloc.c */ +void __vunmap_page_range(unsigned long addr, unsigned long end); +int 
__vmap_page_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages); + +void *kmsan_internal_return_address(int arg); +bool kmsan_internal_is_module_addr(void *vaddr); +bool kmsan_internal_is_vmalloc_addr(void *addr); + +#endif /* __MM_KMSAN_KMSAN_H */ diff --git a/mm/kmsan/kmsan_entry.c b/mm/kmsan/kmsan_entry.c new file mode 100644 index 000000000000..9511a7dad541 --- /dev/null +++ b/mm/kmsan/kmsan_entry.c @@ -0,0 +1,130 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN hooks for entry_64.S + * + * Copyright (C) 2018-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include "kmsan.h" + +static void kmsan_context_enter(void) +{ + int level =3D this_cpu_read(kmsan_context_level) + 1; + + BUG_ON(level >=3D KMSAN_NESTED_CONTEXT_MAX); + this_cpu_write(kmsan_context_level, level); +} + +static void kmsan_context_exit(void) +{ + int level =3D this_cpu_read(kmsan_context_level) - 1; + + BUG_ON(level < 0); + this_cpu_write(kmsan_context_level, level); +} + +void kmsan_interrupt_enter(void) +{ + int in_interrupt =3D this_cpu_read(kmsan_in_interrupt); + + /* Turns out it's possible for in_interrupt to be >0 here. */ + kmsan_context_enter(); + BUG_ON(in_interrupt > 1); + /* Can't check preempt_count() here, it may be zero. */ + this_cpu_write(kmsan_in_interrupt, in_interrupt + 1); +} +EXPORT_SYMBOL(kmsan_interrupt_enter); + +void kmsan_interrupt_exit(void) +{ + int in_interrupt =3D this_cpu_read(kmsan_in_interrupt); + + BUG_ON(!in_interrupt); + kmsan_context_exit(); + /* Can't check preempt_count() here, it may be zero. */ + this_cpu_write(kmsan_in_interrupt, in_interrupt - 1); +} +EXPORT_SYMBOL(kmsan_interrupt_exit); + +void kmsan_softirq_enter(void) +{ + bool in_softirq =3D this_cpu_read(kmsan_in_softirq); + + BUG_ON(in_softirq); + kmsan_context_enter(); + /* Can't check preempt_count() here, it may be zero. */ + this_cpu_write(kmsan_in_softirq, true); +} +EXPORT_SYMBOL(kmsan_softirq_enter); + +void kmsan_softirq_exit(void) +{ + bool in_softirq =3D this_cpu_read(kmsan_in_softirq); + + BUG_ON(!in_softirq); + kmsan_context_exit(); + /* Can't check preempt_count() here, it may be zero. 
*/ + this_cpu_write(kmsan_in_softirq, false); +} +EXPORT_SYMBOL(kmsan_softirq_exit); + +void kmsan_nmi_enter(void) +{ + bool in_nmi =3D this_cpu_read(kmsan_in_nmi); + + BUG_ON(in_nmi); + BUG_ON(preempt_count() & NMI_MASK); + kmsan_context_enter(); + this_cpu_write(kmsan_in_nmi, true); +} +EXPORT_SYMBOL(kmsan_nmi_enter); + +void kmsan_nmi_exit(void) +{ + bool in_nmi =3D this_cpu_read(kmsan_in_nmi); + + BUG_ON(!in_nmi); + BUG_ON(preempt_count() & NMI_MASK); + kmsan_context_exit(); + this_cpu_write(kmsan_in_nmi, false); + +} +EXPORT_SYMBOL(kmsan_nmi_exit); + +void kmsan_syscall_enter(void) +{ + +} +EXPORT_SYMBOL(kmsan_syscall_enter); + +void kmsan_syscall_exit(void) +{ + +} +EXPORT_SYMBOL(kmsan_syscall_exit); + +void kmsan_ist_enter(u64 shift_ist) +{ + kmsan_context_enter(); +} +EXPORT_SYMBOL(kmsan_ist_enter); + +void kmsan_ist_exit(u64 shift_ist) +{ + kmsan_context_exit(); +} +EXPORT_SYMBOL(kmsan_ist_exit); + +void kmsan_unpoison_pt_regs(struct pt_regs *regs) +{ + if (!kmsan_ready || IN_RUNTIME()) + return; + kmsan_internal_unpoison_shadow(regs, sizeof(*regs), /*checked*/true); +} +EXPORT_SYMBOL(kmsan_unpoison_pt_regs); diff --git a/mm/kmsan/kmsan_hooks.c b/mm/kmsan/kmsan_hooks.c new file mode 100644 index 000000000000..37b362d0cea9 --- /dev/null +++ b/mm/kmsan/kmsan_hooks.c @@ -0,0 +1,393 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN hooks for kernel subsystems. + * + * These functions handle creation of KMSAN metadata for memory allocation= s. + * + * Copyright (C) 2018-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../slab.h" +#include "kmsan.h" + +/* TODO(glider): do we need to export these symbols? */ + +/* + * The functions may call back to instrumented code, which, in turn, may c= all + * these hooks again. To avoid re-entrancy, we use __GFP_NO_KMSAN_SHADOW. + * Instrumented functions shouldn't be called under + * ENTER_RUNTIME()/LEAVE_RUNTIME(), because this will lead to skipping + * effects of functions like memset() inside instrumented code. + */ +/* Called from kernel/kthread.c, kernel/fork.c */ +void kmsan_task_create(struct task_struct *task) +{ + unsigned long irq_flags; + + if (!task) + return; + ENTER_RUNTIME(irq_flags); + kmsan_internal_task_create(task); + LEAVE_RUNTIME(irq_flags); +} +EXPORT_SYMBOL(kmsan_task_create); + + +/* Called from kernel/exit.c */ +void kmsan_task_exit(struct task_struct *task) +{ + unsigned long irq_flags; + struct kmsan_task_state *state =3D &task->kmsan; + + if (!kmsan_ready || IN_RUNTIME()) + return; + + ENTER_RUNTIME(irq_flags); + state->allow_reporting =3D false; + + LEAVE_RUNTIME(irq_flags); +} +EXPORT_SYMBOL(kmsan_task_exit); + +/* Called from mm/slub.c */ +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags) +{ + unsigned long irq_flags; + + if (unlikely(object =3D=3D NULL)) + return; + if (!kmsan_ready || IN_RUNTIME()) + return; + /* + * There's a ctor or this is an RCU cache - do nothing. The memory + * status hasn't changed since last use. 
+ */ + if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU)) + return; + + ENTER_RUNTIME(irq_flags); + if (flags & __GFP_ZERO) { + kmsan_internal_unpoison_shadow(object, s->object_size, + KMSAN_POISON_CHECK); + } else { + kmsan_internal_poison_shadow(object, s->object_size, flags, + KMSAN_POISON_CHECK); + } + LEAVE_RUNTIME(irq_flags); +} + +/* Called from mm/slub.c */ +void kmsan_slab_free(struct kmem_cache *s, void *object) +{ + unsigned long irq_flags; + + if (!kmsan_ready || IN_RUNTIME()) + return; + ENTER_RUNTIME(irq_flags); + + /* RCU slabs could be legally used after free within the RCU period */ + if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON))) + goto leave; + if (s->ctor) + goto leave; + kmsan_internal_poison_shadow(object, s->object_size, + GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); +leave: + LEAVE_RUNTIME(irq_flags); +} + +/* Called from mm/slub.c */ +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags) +{ + unsigned long irq_flags; + + if (unlikely(ptr =3D=3D NULL)) + return; + if (!kmsan_ready || IN_RUNTIME()) + return; + ENTER_RUNTIME(irq_flags); + if (flags & __GFP_ZERO) { + kmsan_internal_unpoison_shadow((void *)ptr, size, + /*checked*/true); + } else { + kmsan_internal_poison_shadow((void *)ptr, size, flags, + KMSAN_POISON_CHECK); + } + LEAVE_RUNTIME(irq_flags); +} + +/* Called from mm/slub.c */ +void kmsan_kfree_large(const void *ptr) +{ + struct page *page; + unsigned long irq_flags; + + if (!kmsan_ready || IN_RUNTIME()) + return; + ENTER_RUNTIME(irq_flags); + page =3D virt_to_head_page((void *)ptr); + BUG_ON(ptr !=3D page_address(page)); + kmsan_internal_poison_shadow( + (void *)ptr, PAGE_SIZE << compound_order(page), GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + LEAVE_RUNTIME(irq_flags); +} + + +static unsigned long vmalloc_shadow(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, 1, META_SHADOW); +} + +static unsigned long vmalloc_origin(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, 1, META_ORIGIN); +} + +/* Called from mm/vmalloc.c */ +void kmsan_vunmap_page_range(unsigned long start, unsigned long end) +{ + __vunmap_page_range(vmalloc_shadow(start), vmalloc_shadow(end)); + __vunmap_page_range(vmalloc_origin(start), vmalloc_origin(end)); +} + +/* Called from lib/ioremap.c */ +/* + * This function creates new shadow/origin pages for the physical pages ma= pped + * into the virtual memory. If those physical pages already had shadow/ori= gin, + * those are ignored. 
+ */ +void kmsan_ioremap_page_range(unsigned long start, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot) +{ + unsigned long irq_flags; + struct page *shadow, *origin; + int i, nr; + unsigned long off =3D 0; + gfp_t gfp_mask =3D GFP_KERNEL | __GFP_ZERO | __GFP_NO_KMSAN_SHADOW; + + if (!kmsan_ready || IN_RUNTIME()) + return; + + nr =3D (end - start) / PAGE_SIZE; + ENTER_RUNTIME(irq_flags); + for (i =3D 0; i < nr; i++, off +=3D PAGE_SIZE) { + shadow =3D alloc_pages(gfp_mask, 1); + origin =3D alloc_pages(gfp_mask, 1); + __vmap_page_range_noflush(vmalloc_shadow(start + off), + vmalloc_shadow(start + off + PAGE_SIZE), + prot, &shadow); + __vmap_page_range_noflush(vmalloc_origin(start + off), + vmalloc_origin(start + off + PAGE_SIZE), + prot, &origin); + } + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); + LEAVE_RUNTIME(irq_flags); +} + +void kmsan_iounmap_page_range(unsigned long start, unsigned long end) +{ + int i, nr; + struct page *shadow, *origin; + unsigned long v_shadow, v_origin; + unsigned long irq_flags; + + if (!kmsan_ready || IN_RUNTIME()) + return; + + nr =3D (end - start) / PAGE_SIZE; + ENTER_RUNTIME(irq_flags); + v_shadow =3D (unsigned long)vmalloc_shadow(start); + v_origin =3D (unsigned long)vmalloc_origin(start); + for (i =3D 0; i < nr; i++, v_shadow +=3D PAGE_SIZE, v_origin +=3D PAGE_SI= ZE) { + shadow =3D vmalloc_to_page_or_null((void *)v_shadow); + origin =3D vmalloc_to_page_or_null((void *)v_origin); + __vunmap_page_range(v_shadow, v_shadow + PAGE_SIZE); + __vunmap_page_range(v_origin, v_origin + PAGE_SIZE); + if (shadow) + __free_pages(shadow, 1); + if (origin) + __free_pages(origin, 1); + } + LEAVE_RUNTIME(irq_flags); +} + +/* Called from include/linux/uaccess.h, include/linux/uaccess.h */ +void kmsan_copy_to_user(const void *to, const void *from, + size_t to_copy, size_t left) +{ + void *shadow; + + if (!kmsan_ready || IN_RUNTIME()) + return; + /* + * At this point we've copied the memory already. It's hard to check it + * before copying, as the size of actually copied buffer is unknown. + */ + + /* copy_to_user() may copy zero bytes. No need to check. */ + if (!to_copy) + return; + /* Or maybe copy_to_user() failed to copy anything. */ + if (to_copy =3D=3D left) + return; + if ((u64)to < TASK_SIZE) { + /* This is a user memory access, check it. */ + kmsan_internal_check_memory((void *)from, to_copy - left, to, + REASON_COPY_TO_USER); + return; + } + /* Otherwise this is a kernel memory access. This happens when a compat + * syscall passes an argument allocated on the kernel stack to a real + * syscall. + * Don't check anything, just copy the shadow of the copied bytes. + */ + shadow =3D kmsan_get_metadata((void *)to, to_copy - left, META_SHADOW); + if (shadow) + kmsan_memcpy_metadata((void *)to, (void *)from, to_copy - left); +} +EXPORT_SYMBOL(kmsan_copy_to_user); + +void kmsan_poison_shadow(const void *address, size_t size, gfp_t flags) +{ + unsigned long irq_flags; + + if (!kmsan_ready || IN_RUNTIME()) + return; + ENTER_RUNTIME(irq_flags); + /* The users may want to poison/unpoison random memory. */ + kmsan_internal_poison_shadow((void *)address, size, flags, + KMSAN_POISON_NOCHECK); + LEAVE_RUNTIME(irq_flags); +} +EXPORT_SYMBOL(kmsan_poison_shadow); + +void kmsan_unpoison_shadow(const void *address, size_t size) +{ + unsigned long irq_flags; + + if (!kmsan_ready || IN_RUNTIME()) + return; + + ENTER_RUNTIME(irq_flags); + /* The users may want to poison/unpoison random memory. 
*/ + kmsan_internal_unpoison_shadow((void *)address, size, + KMSAN_POISON_NOCHECK); + LEAVE_RUNTIME(irq_flags); +} +EXPORT_SYMBOL(kmsan_unpoison_shadow); + +void kmsan_check_memory(const void *addr, size_t size) +{ + return kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); +} +EXPORT_SYMBOL(kmsan_check_memory); + +void kmsan_gup_pgd_range(struct page **pages, int nr) +{ + int i; + void *page_addr; + + /* + * gup_pgd_range() has just created a number of new pages that KMSAN + * treats as uninitialized. In the case they belong to the userspace + * memory, unpoison the corresponding kernel pages. + */ + for (i =3D 0; i < nr; i++) { + page_addr =3D page_address(pages[i]); + if (((u64)page_addr < TASK_SIZE) && + ((u64)page_addr + PAGE_SIZE < TASK_SIZE)) + kmsan_unpoison_shadow(page_addr, PAGE_SIZE); + } + +} +EXPORT_SYMBOL(kmsan_gup_pgd_range); + +/* Helper function to check an SKB. */ +void kmsan_check_skb(const struct sk_buff *skb) +{ + int start =3D skb_headlen(skb); + struct sk_buff *frag_iter; + int i, copy =3D 0; + skb_frag_t *f; + u32 p_off, p_len, copied; + struct page *p; + u8 *vaddr; + + if (!skb || !skb->len) + return; + + kmsan_internal_check_memory(skb->data, skb_headlen(skb), 0, REASON_ANY); + if (skb_is_nonlinear(skb)) { + for (i =3D 0; i < skb_shinfo(skb)->nr_frags; i++) { + f =3D &skb_shinfo(skb)->frags[i]; + + skb_frag_foreach_page(f, + skb_frag_off(f) - start, + copy, p, p_off, p_len, copied) { + + vaddr =3D kmap_atomic(p); + kmsan_internal_check_memory(vaddr + p_off, + p_len, /*user_addr*/ 0, + REASON_ANY); + kunmap_atomic(vaddr); + } + } + } + skb_walk_frags(skb, frag_iter) + kmsan_check_skb(frag_iter); +} +EXPORT_SYMBOL(kmsan_check_skb); + +/* Helper function to check an URB. */ +void kmsan_handle_urb(const struct urb *urb, bool is_out) +{ + if (!urb) + return; + if (is_out) + kmsan_internal_check_memory(urb->transfer_buffer, + urb->transfer_buffer_length, + /*user_addr*/ 0, REASON_SUBMIT_URB); + else + kmsan_internal_unpoison_shadow(urb->transfer_buffer, + urb->transfer_buffer_length, + /*checked*/false); +} +EXPORT_SYMBOL(kmsan_handle_urb); + +/* Helper function to check I2C-transferred data. */ +void kmsan_handle_i2c_transfer(struct i2c_msg *msgs, int num) +{ + int i; + + if (!msgs) + return; + for (i =3D 0; i < num; i++) { + if (msgs[i].flags & I2C_M_RD) + kmsan_internal_unpoison_shadow(msgs[i].buf, + msgs[i].len, + /*checked*/false); + else + kmsan_internal_check_memory(msgs[i].buf, msgs[i].len, + /*user_addr*/0, + REASON_ANY); + } +} diff --git a/mm/kmsan/kmsan_init.c b/mm/kmsan/kmsan_init.c new file mode 100644 index 000000000000..2816e7075a30 --- /dev/null +++ b/mm/kmsan/kmsan_init.c @@ -0,0 +1,88 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN initialization routines. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include "kmsan.h" + +#include +#include +#include + +#define NUM_FUTURE_RANGES 128 +struct start_end_pair { + void *start, *end; +}; + +static struct start_end_pair start_end_pairs[NUM_FUTURE_RANGES] __initdata= ; +static int future_index __initdata; + +/* + * Record a range of memory for which the metadata pages will be created o= nce + * the page allocator becomes available. + * TODO(glider): squash together ranges belonging to the same page. 
+ */
+static void __init kmsan_record_future_shadow_range(void *start, void *end)
+{
+	BUG_ON(future_index == NUM_FUTURE_RANGES);
+	BUG_ON((start >= end) || !start || !end);
+	start_end_pairs[future_index].start = start;
+	start_end_pairs[future_index].end = end;
+	future_index++;
+}
+
+extern char _sdata[], _edata[];
+
+/*
+ * Initialize the shadow for existing mappings during kernel initialization.
+ * These include kernel text/data sections, NODE_DATA and future ranges
+ * registered while creating other data (e.g. percpu).
+ *
+ * Allocations via memblock can be only done before slab is initialized.
+ */
+void __init kmsan_initialize_shadow(void)
+{
+	int nid;
+	u64 i;
+	const size_t nd_size = roundup(sizeof(pg_data_t), PAGE_SIZE);
+	phys_addr_t p_start, p_end;
+
+	for_each_reserved_mem_region(i, &p_start, &p_end) {
+		kmsan_record_future_shadow_range(phys_to_virt(p_start),
+						 phys_to_virt(p_end + 1));
+	}
+	/* Allocate shadow for .data */
+	kmsan_record_future_shadow_range(_sdata, _edata);
+
+	/*
+	 * TODO(glider): alloc_node_data() in arch/x86/mm/numa.c uses
+	 * sizeof(pg_data_t).
+	 */
+	for_each_online_node(nid)
+		kmsan_record_future_shadow_range(
+			NODE_DATA(nid), (char *)NODE_DATA(nid) + nd_size);
+
+	for (i = 0; i < future_index; i++)
+		kmsan_init_alloc_meta_for_range(start_end_pairs[i].start,
+						start_end_pairs[i].end);
+}
+EXPORT_SYMBOL(kmsan_initialize_shadow);
+
+void __init kmsan_initialize(void)
+{
+	/* Assuming current is init_task. */
+	kmsan_internal_task_create(current);
+	kmsan_pr_locked("Starting KernelMemorySanitizer\n");
+	kmsan_ready = true;
+}
+EXPORT_SYMBOL(kmsan_initialize);
diff --git a/mm/kmsan/kmsan_instr.c b/mm/kmsan/kmsan_instr.c
new file mode 100644
index 000000000000..74cb7cee7f70
--- /dev/null
+++ b/mm/kmsan/kmsan_instr.c
@@ -0,0 +1,259 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KMSAN compiler API.
+ *
+ * Copyright (C) 2017-2019 Google LLC
+ * Author: Alexander Potapenko
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include "kmsan.h"
+#include
+#include
+
+static bool is_bad_asm_addr(void *addr, u64 size, bool is_store)
+{
+	if ((u64)addr < TASK_SIZE)
+		return true;
+	if (!kmsan_get_metadata(addr, size, META_SHADOW))
+		return true;
+	return false;
+}
+
+struct shadow_origin_ptr __msan_metadata_ptr_for_load_n(void *addr, u64 size)
+{
+	return kmsan_get_shadow_origin_ptr(addr, size, /*store*/false);
+}
+EXPORT_SYMBOL(__msan_metadata_ptr_for_load_n);
+
+struct shadow_origin_ptr __msan_metadata_ptr_for_store_n(void *addr, u64 size)
+{
+	return kmsan_get_shadow_origin_ptr(addr, size, /*store*/true);
+}
+EXPORT_SYMBOL(__msan_metadata_ptr_for_store_n);
+
+#define DECLARE_METADATA_PTR_GETTER(size)				\
+struct shadow_origin_ptr __msan_metadata_ptr_for_load_##size(void *addr) \
+{									\
+	return kmsan_get_shadow_origin_ptr(addr, size, /*store*/false);	\
+}									\
+EXPORT_SYMBOL(__msan_metadata_ptr_for_load_##size);			\
+									\
+struct shadow_origin_ptr __msan_metadata_ptr_for_store_##size(void *addr) \
+{									\
+	return kmsan_get_shadow_origin_ptr(addr, size, /*store*/true);	\
+}									\
+EXPORT_SYMBOL(__msan_metadata_ptr_for_store_##size)
+
+DECLARE_METADATA_PTR_GETTER(1);
+DECLARE_METADATA_PTR_GETTER(2);
+DECLARE_METADATA_PTR_GETTER(4);
+DECLARE_METADATA_PTR_GETTER(8);
+
+void __msan_instrument_asm_store(void *addr, u64 size)
+{
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || IN_RUNTIME())
+		return;
+	/*
+	 * Most of the accesses are below 32 bytes. The two exceptions so far
+	 * are clwb() (64 bytes) and FPU state (512 bytes).
+	 * It's unlikely that the assembly will touch more than 512 bytes.
+	 */
+	if (size > 512)
+		size = 8;
+	if (is_bad_asm_addr(addr, size, /*is_store*/true))
+		return;
+	ENTER_RUNTIME(irq_flags);
+	/* Unpoison the memory on a best-effort basis. */
+	kmsan_internal_unpoison_shadow(addr, size, /*checked*/false);
+	LEAVE_RUNTIME(irq_flags);
+}
+EXPORT_SYMBOL(__msan_instrument_asm_store);
+
+void *__msan_memmove(void *dst, void *src, u64 n)
+{
+	void *result;
+	void *shadow_dst;
+
+	result = __memmove(dst, src, n);
+	if (!n)
+		/* Some people call memmove() with zero length. */
+		return result;
+	if (!kmsan_ready || IN_RUNTIME())
+		return result;
+
+	/* Ok to skip address check here, we'll do it later. */
+	shadow_dst = kmsan_get_metadata(dst, n, META_SHADOW);
+
+	if (!shadow_dst)
+		/* Can happen e.g. if the memory is untracked. */
+		return result;
+
+	kmsan_memmove_metadata(dst, src, n);
+
+	return result;
+}
+EXPORT_SYMBOL(__msan_memmove);
+
+void *__msan_memmove_nosanitize(void *dst, void *src, u64 n)
+{
+	return __memmove(dst, src, n);
+}
+EXPORT_SYMBOL(__msan_memmove_nosanitize);
+
+void *__msan_memcpy(void *dst, const void *src, u64 n)
+{
+	void *result;
+	void *shadow_dst;
+
+	result = __memcpy(dst, src, n);
+	if (!n)
+		/* Some people call memcpy() with zero length. */
+		return result;
+
+	if (!kmsan_ready || IN_RUNTIME())
+		return result;
+
+	/* Ok to skip address check here, we'll do it later. */
+	shadow_dst = kmsan_get_metadata(dst, n, META_SHADOW);
+	if (!shadow_dst)
+		/* Can happen e.g. if the memory is untracked. */
+		return result;
+
+	kmsan_memcpy_metadata(dst, (void *)src, n);
+
+	return result;
+}
+EXPORT_SYMBOL(__msan_memcpy);
+
+void *__msan_memcpy_nosanitize(void *dst, void *src, u64 n)
+{
+	return __memcpy(dst, src, n);
+}
+EXPORT_SYMBOL(__msan_memcpy_nosanitize);
+
+void *__msan_memset(void *dst, int c, size_t n)
+{
+	void *result;
+	unsigned long irq_flags;
+	depot_stack_handle_t new_origin;
+	unsigned int shadow;
+
+	result = __memset(dst, c, n);
+	if (!kmsan_ready || IN_RUNTIME())
+		return result;
+
+	ENTER_RUNTIME(irq_flags);
+	shadow = 0;
+	kmsan_internal_memset_shadow(dst, shadow, n, /*checked*/false);
+	new_origin = 0;
+	kmsan_internal_set_origin(dst, n, new_origin);
+	LEAVE_RUNTIME(irq_flags);
+
+	return result;
+}
+EXPORT_SYMBOL(__msan_memset);
+
+void *__msan_memset_nosanitize(void *dst, int c, size_t n)
+{
+	return __memset(dst, c, n);
+}
+EXPORT_SYMBOL(__msan_memset_nosanitize);
+
+depot_stack_handle_t __msan_chain_origin(depot_stack_handle_t origin)
+{
+	depot_stack_handle_t ret = 0;
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || IN_RUNTIME())
+		return ret;
+
+	/* Creating new origins may allocate memory. */
+	ENTER_RUNTIME(irq_flags);
+	ret = kmsan_internal_chain_origin(origin);
+	LEAVE_RUNTIME(irq_flags);
+	return ret;
+}
+EXPORT_SYMBOL(__msan_chain_origin);
+
+void __msan_poison_alloca(void *address, u64 size, char *descr)
+{
+	depot_stack_handle_t handle;
+	unsigned long entries[4];
+	unsigned long irq_flags;
+	u64 size_copy = size, to_fill;
+	u64 addr_copy = (u64)address;
+	u64 page_offset;
+	void *shadow_start;
+
+	if (!kmsan_ready || IN_RUNTIME())
+		return;
+
+	while (size_copy) {
+		page_offset = addr_copy % PAGE_SIZE;
+		to_fill = min(PAGE_SIZE - page_offset, size_copy);
+		shadow_start = kmsan_get_metadata((void *)addr_copy, to_fill,
+						  META_SHADOW);
+		/* Shadow may be missing, e.g. if the memory is untracked. */
+		if (shadow_start)
+			__memset(shadow_start, -1, to_fill);
+		addr_copy += to_fill;
+		size_copy -= to_fill;
+	}
+
+	entries[0] = KMSAN_ALLOCA_MAGIC_ORIGIN;
+	entries[1] = (u64)descr;
+	entries[2] = (u64)__builtin_return_address(0);
+	entries[3] = (u64)kmsan_internal_return_address(1);
+
+	/* stack_depot_save() may allocate memory. */
+	ENTER_RUNTIME(irq_flags);
+	handle = stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC);
+	LEAVE_RUNTIME(irq_flags);
+	kmsan_internal_set_origin(address, size, handle);
+}
+EXPORT_SYMBOL(__msan_poison_alloca);
+
+void __msan_unpoison_alloca(void *address, u64 size)
+{
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || IN_RUNTIME())
+		return;
+
+	ENTER_RUNTIME(irq_flags);
+	/* Assuming the shadow exists. */
+	kmsan_internal_unpoison_shadow(address, size, /*checked*/true);
+	LEAVE_RUNTIME(irq_flags);
+}
+EXPORT_SYMBOL(__msan_unpoison_alloca);
+
+void __msan_warning(u32 origin)
+{
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || IN_RUNTIME())
+		return;
+	ENTER_RUNTIME(irq_flags);
+	kmsan_report(origin, /*address*/0, /*size*/0,
+		     /*off_first*/0, /*off_last*/0, /*user_addr*/0, REASON_ANY);
+	LEAVE_RUNTIME(irq_flags);
+}
+EXPORT_SYMBOL(__msan_warning);
+
+struct kmsan_context_state *__msan_get_context_state(void)
+{
+	struct kmsan_context_state *ret;
+
+	ret = task_kmsan_context_state();
+	BUG_ON(!ret);
+	return ret;
+}
+EXPORT_SYMBOL(__msan_get_context_state);
diff --git a/mm/kmsan/kmsan_report.c b/mm/kmsan/kmsan_report.c
new file mode 100644
index 000000000000..443ab9c1e8bf
--- /dev/null
+++ b/mm/kmsan/kmsan_report.c
@@ -0,0 +1,133 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KMSAN error reporting routines.
+ *
+ * Copyright (C) 2019 Google LLC
+ * Author: Alexander Potapenko
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include
+#include
+#include
+
+#include "kmsan.h"
+
+DEFINE_SPINLOCK(report_lock);
+
+void kmsan_print_origin(depot_stack_handle_t origin)
+{
+	unsigned long *entries = NULL, *chained_entries = NULL;
+	unsigned long nr_entries, chained_nr_entries, magic;
+	char *descr = NULL;
+	void *pc1 = NULL, *pc2 = NULL;
+	depot_stack_handle_t head;
+
+	if (!origin) {
+		kmsan_pr_err("Origin not found, presumably a false report.\n");
+		return;
+	}
+
+	while (true) {
+		nr_entries = stack_depot_fetch(origin, &entries);
+		magic = nr_entries ? (entries[0] & KMSAN_MAGIC_MASK) : 0;
+		if ((nr_entries == 4) && (magic == KMSAN_ALLOCA_MAGIC_ORIGIN)) {
+			descr = (char *)entries[1];
+			pc1 = (void *)entries[2];
+			pc2 = (void *)entries[3];
+			kmsan_pr_err("Local variable description: %s\n", descr);
+			kmsan_pr_err("Variable was created at:\n");
+			kmsan_pr_err(" %pS\n", pc1);
+			kmsan_pr_err(" %pS\n", pc2);
+			break;
+		}
+		if ((nr_entries == 3) &&
+		    (magic == KMSAN_CHAIN_MAGIC_ORIGIN_FULL)) {
+			head = entries[1];
+			origin = entries[2];
+			kmsan_pr_err("Uninit was stored to memory at:\n");
+			chained_nr_entries =
+				stack_depot_fetch(head, &chained_entries);
+			stack_trace_print(chained_entries, chained_nr_entries,
+					  0);
+			kmsan_pr_err("\n");
+			continue;
+		}
+		kmsan_pr_err("Uninit was created at:\n");
+		if (entries)
+			stack_trace_print(entries, nr_entries, 0);
+		else
+			kmsan_pr_err("No stack\n");
+		break;
+	}
+}
+
+void kmsan_report(depot_stack_handle_t origin,
+		  void *address, int size, int off_first, int off_last,
+		  const void *user_addr, int reason)
+{
+	unsigned long flags;
+	unsigned long *entries;
+	unsigned int nr_entries;
+	bool is_uaf = false;
+	char *bug_type = NULL;
+
+	if (!kmsan_ready)
+		return;
+	if (!current->kmsan.allow_reporting)
+		return;
+	if (!origin)
+		return;
+
+	nr_entries = stack_depot_fetch(origin, &entries);
+
+	current->kmsan.allow_reporting = false;
+	spin_lock_irqsave(&report_lock, flags);
+	kmsan_pr_err("=====================================================\n");
+	if (get_dsh_extra_bits(origin) & 1)
+		is_uaf = true;
+	switch (reason) {
+	case REASON_ANY:
+		bug_type = is_uaf ? "use-after-free" : "uninit-value";
"use-after-free" : "uninit-value"; + break; + case REASON_COPY_TO_USER: + bug_type =3D is_uaf ? "kernel-infoleak-after-free" : + "kernel-infoleak"; + break; + case REASON_SUBMIT_URB: + bug_type =3D is_uaf ? "kernel-usb-infoleak-after-free" : + "kernel-usb-infoleak"; + break; + } + kmsan_pr_err("BUG: KMSAN: %s in %pS\n", + bug_type, kmsan_internal_return_address(2)); + dump_stack(); + kmsan_pr_err("\n"); + + kmsan_print_origin(origin); + + if (size) { + kmsan_pr_err("\n"); + if (off_first =3D=3D off_last) + kmsan_pr_err("Byte %d of %d is uninitialized\n", + off_first, size); + else + kmsan_pr_err("Bytes %d-%d of %d are uninitialized\n", + off_first, off_last, size); + } + if (address) + kmsan_pr_err("Memory access of size %d starts at %px\n", + size, address); + if (user_addr && reason =3D=3D REASON_COPY_TO_USER) + kmsan_pr_err("Data copied to user address %px\n", user_addr); + kmsan_pr_err("=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D\n"); + add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); + spin_unlock_irqrestore(&report_lock, flags); + if (panic_on_warn) + panic("panic_on_warn set ...\n"); + current->kmsan.allow_reporting =3D true; +} diff --git a/mm/kmsan/kmsan_shadow.c b/mm/kmsan/kmsan_shadow.c new file mode 100644 index 000000000000..06801d76e6b8 --- /dev/null +++ b/mm/kmsan/kmsan_shadow.c @@ -0,0 +1,543 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN shadow implementation. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kmsan.h" +#include "kmsan_shadow.h" + +#define shadow_page_for(page) \ + ((page)->shadow) + +#define origin_page_for(page) \ + ((page)->origin) + +#define shadow_ptr_for(page) \ + (page_address((page)->shadow)) + +#define origin_ptr_for(page) \ + (page_address((page)->origin)) + +#define has_shadow_page(page) \ + (!!((page)->shadow)) + +#define has_origin_page(page) \ + (!!((page)->origin)) + +#define set_no_shadow_origin_page(page) \ + do { \ + (page)->shadow =3D NULL; \ + (page)->origin =3D NULL; \ + } while (0) /**/ + +#define is_ignored_page(page) \ + (!!(((u64)((page)->shadow)) % 2)) + +#define ignore_page(pg) \ + ((pg)->shadow =3D (struct page *)((u64)((pg)->shadow) | 1)) \ + +DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_shadow); +DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_origin); + +/* + * Dummy load and store pages to be used when the real metadata is unavail= able. + * There are separate pages for loads and stores, so that every load retur= ns a + * zero, and every store doesn't affect other stores. + */ +char dummy_load_page[PAGE_SIZE] __aligned(PAGE_SIZE); +char dummy_store_page[PAGE_SIZE] __aligned(PAGE_SIZE); + +/* + * Taken from arch/x86/mm/physaddr.h to avoid using an instrumented versio= n. + */ +static int kmsan_phys_addr_valid(unsigned long addr) +{ +#ifdef CONFIG_PHYS_ADDR_T_64BIT + return !(addr >> boot_cpu_data.x86_phys_bits); +#else + return 1; +#endif +} + +/* + * Taken from arch/x86/mm/physaddr.c to avoid using an instrumented versio= n. 
+ */
+static bool kmsan_virt_addr_valid(void *addr)
+{
+	unsigned long x = (unsigned long)addr;
+	unsigned long y = x - __START_KERNEL_map;
+
+	/* use the carry flag to determine if x was < __START_KERNEL_map */
+	if (unlikely(x > y)) {
+		x = y + phys_base;
+
+		if (y >= KERNEL_IMAGE_SIZE)
+			return false;
+	} else {
+		x = y + (__START_KERNEL_map - PAGE_OFFSET);
+
+		/* carry flag will be set if starting x was >= PAGE_OFFSET */
+		if ((x > y) || !kmsan_phys_addr_valid(x))
+			return false;
+	}
+
+	return pfn_valid(x >> PAGE_SHIFT);
+}
+
+static unsigned long vmalloc_meta(void *addr, bool is_origin)
+{
+	unsigned long addr64 = (unsigned long)addr, off;
+
+	BUG_ON(is_origin && !IS_ALIGNED(addr64, ORIGIN_SIZE));
+	if (kmsan_internal_is_vmalloc_addr(addr)) {
+		return addr64 + (is_origin ? VMALLOC_ORIGIN_OFFSET
+					   : VMALLOC_SHADOW_OFFSET);
+	}
+	if (kmsan_internal_is_module_addr(addr)) {
+		off = addr64 - MODULES_VADDR;
+		return off + (is_origin ? MODULES_ORIGIN_START
+					: MODULES_SHADOW_START);
+	}
+	return 0;
+}
+
+static void *get_cea_meta_or_null(void *addr, bool is_origin)
+{
+	int cpu = smp_processor_id();
+	int off;
+	char *metadata_array;
+
+	if (((u64)addr < CPU_ENTRY_AREA_BASE) ||
+	    ((u64)addr >= (CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE)))
+		return NULL;
+	off = (char *)addr - (char *)get_cpu_entry_area(cpu);
+	if ((off < 0) || (off >= CPU_ENTRY_AREA_SIZE))
+		return NULL;
+	metadata_array = is_origin ? cpu_entry_area_origin :
+				     cpu_entry_area_shadow;
+	return &per_cpu(metadata_array[off], cpu);
+}
+
+static struct page *virt_to_page_or_null(void *vaddr)
+{
+	if (kmsan_virt_addr_valid(vaddr))
+		return virt_to_page(vaddr);
+	else
+		return NULL;
+}
+
+struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *address, u64 size,
+						     bool store)
+{
+	struct shadow_origin_ptr ret;
+	struct page *page;
+	u64 pad, offset, o_offset;
+	const u64 addr64 = (u64)address;
+	u64 o_addr64 = (u64)address;
+	void *shadow;
+
+	if (size > PAGE_SIZE) {
+		WARN(1, "size too big in %s(%px, %d, %d)\n",
+		     __func__, address, size, store);
+		BUG();
+	}
+	if (store) {
+		ret.s = dummy_store_page;
+		ret.o = dummy_store_page;
+	} else {
+		ret.s = dummy_load_page;
+		ret.o = dummy_load_page;
+	}
+	if (!kmsan_ready || IN_RUNTIME())
+		return ret;
+	BUG_ON(!metadata_is_contiguous(address, size, META_SHADOW));
+
+	if (!IS_ALIGNED(addr64, ORIGIN_SIZE)) {
+		pad = addr64 % ORIGIN_SIZE;
+		o_addr64 -= pad;
+	}
+
+	if (kmsan_internal_is_vmalloc_addr(address) ||
+	    kmsan_internal_is_module_addr(address)) {
+		ret.s = (void *)vmalloc_meta(address, META_SHADOW);
+		ret.o = (void *)vmalloc_meta((void *)o_addr64, META_ORIGIN);
+		return ret;
+	}
+
+	if (!kmsan_virt_addr_valid(address)) {
+		page = vmalloc_to_page_or_null(address);
+		if (page)
+			goto next;
+		shadow = get_cea_meta_or_null(address, META_SHADOW);
+		if (shadow) {
+			ret.s = shadow;
+			ret.o = get_cea_meta_or_null((void *)o_addr64,
+						     META_ORIGIN);
+			return ret;
+		}
+	}
+	page = virt_to_page_or_null(address);
+	if (!page)
+		return ret;
+next:
+	if (is_ignored_page(page))
+		return ret;
+
+	if (!has_shadow_page(page) || !has_origin_page(page))
+		return ret;
+	offset = addr64 % PAGE_SIZE;
+	o_offset = o_addr64 % PAGE_SIZE;
+
+	if (offset + size - 1 > PAGE_SIZE) {
+		/*
+		 * The access overflows the current page and touches the
+		 * subsequent ones. Make sure the shadow/origin pages are also
+		 * contiguous.
+		 */
+		BUG_ON(!metadata_is_contiguous(address, size, META_SHADOW));
+	}
+
+	ret.s = shadow_ptr_for(page) + offset;
+	ret.o = origin_ptr_for(page) + o_offset;
+	return ret;
+}
+
+/*
+ * Obtain the shadow or origin pointer for the given address, or NULL if
+ * there's none. The caller must check the return value for being non-NULL
+ * if needed.
+ * The return value of this function should not depend on whether we're in
+ * the runtime or not.
+ */
+void *kmsan_get_metadata(void *address, size_t size, bool is_origin)
+{
+	struct page *page;
+	void *ret;
+	u64 addr = (u64)address, pad, off;
+
+	if (is_origin && !IS_ALIGNED(addr, ORIGIN_SIZE)) {
+		pad = addr % ORIGIN_SIZE;
+		addr -= pad;
+		size += pad;
+	}
+	address = (void *)addr;
+	if (kmsan_internal_is_vmalloc_addr(address) ||
+	    kmsan_internal_is_module_addr(address)) {
+		return (void *)vmalloc_meta(address, is_origin);
+	}
+
+	if (!kmsan_virt_addr_valid(address)) {
+		page = vmalloc_to_page_or_null(address);
+		if (page)
+			goto next;
+		ret = get_cea_meta_or_null(address, is_origin);
+		if (ret)
+			return ret;
+	}
+	page = virt_to_page_or_null(address);
+	if (!page)
+		return NULL;
+next:
+	if (is_ignored_page(page))
+		return NULL;
+	if (!has_shadow_page(page) || !has_origin_page(page))
+		return NULL;
+	off = addr % PAGE_SIZE;
+
+	ret = (is_origin ? origin_ptr_for(page) : shadow_ptr_for(page)) + off;
+	return ret;
+}
+
+void __init kmsan_init_alloc_meta_for_range(void *start, void *end)
+{
+	u64 addr, size;
+	struct page *page;
+	void *shadow, *origin;
+	struct page *shadow_p, *origin_p;
+
+	start = (void *)ALIGN_DOWN((u64)start, PAGE_SIZE);
+	size = ALIGN((u64)end - (u64)start, PAGE_SIZE);
+	shadow = memblock_alloc(size, PAGE_SIZE);
+	origin = memblock_alloc(size, PAGE_SIZE);
+	for (addr = 0; addr < size; addr += PAGE_SIZE) {
+		page = virt_to_page_or_null((char *)start + addr);
+		shadow_p = virt_to_page_or_null((char *)shadow + addr);
+		set_no_shadow_origin_page(shadow_p);
+		shadow_page_for(page) = shadow_p;
+		origin_p = virt_to_page_or_null((char *)origin + addr);
+		set_no_shadow_origin_page(origin_p);
+		origin_page_for(page) = origin_p;
+	}
+}
+
+/* Called from mm/memory.c */
+void kmsan_copy_page_meta(struct page *dst, struct page *src)
+{
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || IN_RUNTIME())
+		return;
+	if (!has_shadow_page(src)) {
+		/* TODO(glider): are we leaking pages here? */
+		set_no_shadow_origin_page(dst);
+		return;
+	}
+	if (!has_shadow_page(dst))
+		return;
+	if (is_ignored_page(src)) {
+		ignore_page(dst);
+		return;
+	}
+
+	ENTER_RUNTIME(irq_flags);
+	__memcpy(shadow_ptr_for(dst), shadow_ptr_for(src),
+		 PAGE_SIZE);
+	BUG_ON(!has_origin_page(src) || !has_origin_page(dst));
+	__memcpy(origin_ptr_for(dst), origin_ptr_for(src),
+		 PAGE_SIZE);
+	LEAVE_RUNTIME(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_copy_page_meta);
+
+/* Helper function to allocate page metadata. */
+static int kmsan_internal_alloc_meta_for_pages(struct page *page,
+					       unsigned int order,
+					       gfp_t flags, int node)
+{
+	struct page *shadow, *origin;
+	int pages = 1 << order;
+	int i;
+	bool initialized = (flags & __GFP_ZERO) || !kmsan_ready;
+	depot_stack_handle_t handle;
+
+	if (flags & __GFP_NO_KMSAN_SHADOW) {
+		for (i = 0; i < pages; i++)
+			set_no_shadow_origin_page(&page[i]);
+		return 0;
+	}
+
+	/* TODO(glider): must we override the flags? */
+	flags = GFP_ATOMIC;
+	if (initialized)
+		flags |= __GFP_ZERO;
+	shadow = alloc_pages_node(node, flags | __GFP_NO_KMSAN_SHADOW, order);
+	if (!shadow) {
+		for (i = 0; i < pages; i++)
+			set_no_shadow_origin_page(&page[i]);
+		return -ENOMEM;
+	}
+	if (!initialized)
+		__memset(page_address(shadow), -1, PAGE_SIZE * pages);
+
+	origin = alloc_pages_node(node, flags | __GFP_NO_KMSAN_SHADOW, order);
+	/* Assume we've allocated the origin. */
+	if (!origin) {
+		__free_pages(shadow, order);
+		for (i = 0; i < pages; i++)
+			set_no_shadow_origin_page(&page[i]);
+		return -ENOMEM;
+	}
+
+	if (!initialized) {
+		handle = kmsan_save_stack_with_flags(flags, /*extra_bits*/0);
+		/*
+		 * Addresses are page-aligned, pages are contiguous, so it's ok
+		 * to just fill the origin pages with |handle|.
+		 */
+		for (i = 0; i < PAGE_SIZE * pages / sizeof(handle); i++) {
+			((depot_stack_handle_t *)page_address(origin))[i] =
+				handle;
+		}
+	}
+
+	for (i = 0; i < pages; i++) {
+		shadow_page_for(&page[i]) = &shadow[i];
+		set_no_shadow_origin_page(shadow_page_for(&page[i]));
+		origin_page_for(&page[i]) = &origin[i];
+		set_no_shadow_origin_page(origin_page_for(&page[i]));
+	}
+	return 0;
+}
+
+/* Called from mm/page_alloc.c */
+int kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags)
+{
+	unsigned long irq_flags;
+	int ret;
+
+	if (IN_RUNTIME())
+		return 0;
+	ENTER_RUNTIME(irq_flags);
+	ret = kmsan_internal_alloc_meta_for_pages(page, order, flags, -1);
+	LEAVE_RUNTIME(irq_flags);
+	return ret;
+}
+
+/* Called from mm/page_alloc.c */
+void kmsan_free_page(struct page *page, unsigned int order)
+{
+	struct page *shadow, *origin, *cur_page;
+	int pages = 1 << order;
+	int i;
+	unsigned long irq_flags;
+
+	if (!shadow_page_for(page)) {
+		for (i = 0; i < pages; i++) {
+			cur_page = &page[i];
+			BUG_ON(shadow_page_for(cur_page));
+		}
+		return;
+	}
+
+	if (!kmsan_ready) {
+		for (i = 0; i < pages; i++) {
+			cur_page = &page[i];
+			set_no_shadow_origin_page(cur_page);
+		}
+		return;
+	}
+
+	if (IN_RUNTIME()) {
+		/*
+		 * TODO(glider): looks legit. depot_save_stack() may call
+		 * free_pages().
+		 */
+		return;
+	}
+
+	ENTER_RUNTIME(irq_flags);
+	shadow = shadow_page_for(&page[0]);
+	origin = origin_page_for(&page[0]);
+
+	/* TODO(glider): this is racy. */
+	for (i = 0; i < pages; i++) {
+		BUG_ON(has_shadow_page(shadow_page_for(&page[i])));
+		BUG_ON(has_shadow_page(origin_page_for(&page[i])));
+		set_no_shadow_origin_page(&page[i]);
+	}
+	BUG_ON(has_shadow_page(shadow));
+	__free_pages(shadow, order);
+
+	BUG_ON(has_shadow_page(origin));
+	__free_pages(origin, order);
+	LEAVE_RUNTIME(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_free_page);
+
+/* Called from mm/page_alloc.c */
+void kmsan_split_page(struct page *page, unsigned int order)
+{
+	struct page *shadow, *origin;
+	unsigned long irq_flags;
+
+	if (!kmsan_ready || IN_RUNTIME())
+		return;
+
+	ENTER_RUNTIME(irq_flags);
+	if (!has_shadow_page(&page[0])) {
+		BUG_ON(has_origin_page(&page[0]));
+		LEAVE_RUNTIME(irq_flags);
+		return;
+	}
+	shadow = shadow_page_for(&page[0]);
+	split_page(shadow, order);
+
+	origin = origin_page_for(&page[0]);
+	split_page(origin, order);
+	LEAVE_RUNTIME(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_split_page);
+
+/* Called from include/linux/highmem.h */
+void kmsan_clear_page(void *page_addr)
+{
+	struct page *page;
+
+	if (!kmsan_ready || IN_RUNTIME())
+		return;
+	BUG_ON(!IS_ALIGNED((u64)page_addr, PAGE_SIZE));
+	page = vmalloc_to_page_or_null(page_addr);
+	if (!page)
+		page = virt_to_page_or_null(page_addr);
+	if (!page || !has_shadow_page(page))
+		return;
+	__memset(shadow_ptr_for(page), 0, PAGE_SIZE);
+	BUG_ON(!has_origin_page(page));
+	__memset(origin_ptr_for(page), 0, PAGE_SIZE);
+}
+EXPORT_SYMBOL(kmsan_clear_page);
+
+/* Called from mm/vmalloc.c */
+void kmsan_vmap_page_range_noflush(unsigned long start, unsigned long end,
+				   pgprot_t prot, struct page **pages)
+{
+	int nr, i, mapped;
+	struct page **s_pages, **o_pages;
+	unsigned long shadow_start, shadow_end, origin_start, origin_end;
+
+	if (!kmsan_ready || IN_RUNTIME())
+		return;
+	shadow_start = vmalloc_meta((void *)start, META_SHADOW);
+	if (!shadow_start)
+		return;
+
+	BUG_ON(start >= end);
+	nr = (end - start) / PAGE_SIZE;
+	s_pages = kcalloc(nr, sizeof(struct page *), GFP_KERNEL);
+	o_pages = kcalloc(nr, sizeof(struct page *), GFP_KERNEL);
+	if (!s_pages || !o_pages)
+		goto ret;
+	for (i = 0; i < nr; i++) {
+		s_pages[i] = shadow_page_for(pages[i]);
+		o_pages[i] = origin_page_for(pages[i]);
+	}
+	prot = __pgprot(pgprot_val(prot) | _PAGE_NX);
+	prot = PAGE_KERNEL;
+
+	shadow_end = vmalloc_meta((void *)end, META_SHADOW);
+	origin_start = vmalloc_meta((void *)start, META_ORIGIN);
+	origin_end = vmalloc_meta((void *)end, META_ORIGIN);
+	mapped = __vmap_page_range_noflush(shadow_start, shadow_end,
+					   prot, s_pages);
+	BUG_ON(mapped != nr);
+	flush_tlb_kernel_range(shadow_start, shadow_end);
+	mapped = __vmap_page_range_noflush(origin_start, origin_end,
+					   prot, o_pages);
+	BUG_ON(mapped != nr);
+	flush_tlb_kernel_range(origin_start, origin_end);
+ret:
+	kfree(s_pages);
+	kfree(o_pages);
+}
+
+void kmsan_ignore_page(struct page *page, int order)
+{
+	int pages = 1 << order;
+	int i;
+	struct page *cp;
+
+	for (i = 0; i < pages; i++) {
+		cp = &page[i];
+		ignore_page(cp);
+	}
+}
diff --git a/mm/kmsan/kmsan_shadow.h b/mm/kmsan/kmsan_shadow.h
new file mode 100644
index 000000000000..eaa7f771b6a5
--- /dev/null
+++ b/mm/kmsan/kmsan_shadow.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * KMSAN shadow API.
+ *
+ * This should be agnostic to shadow implementation details.
+ *
+ * Copyright (C) 2017-2019 Google LLC
+ * Author: Alexander Potapenko
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#ifndef __MM_KMSAN_KMSAN_SHADOW_H
+#define __MM_KMSAN_KMSAN_SHADOW_H
+
+#include /* for CPU_ENTRY_AREA_MAP_SIZE */
+
+struct shadow_origin_ptr {
+	void *s, *o;
+};
+
+struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size,
+						     bool store);
+void *kmsan_get_metadata(void *addr, size_t size, bool is_origin);
+void __init kmsan_init_alloc_meta_for_range(void *start, void *end);
+
+#endif /* __MM_KMSAN_KMSAN_SHADOW_H */
diff --git a/scripts/Makefile.kmsan b/scripts/Makefile.kmsan
new file mode 100644
index 000000000000..8b3844b66b22
--- /dev/null
+++ b/scripts/Makefile.kmsan
@@ -0,0 +1,12 @@
+ifdef CONFIG_KMSAN
+
+CFLAGS_KMSAN := -fsanitize=kernel-memory
+
+ifeq ($(call cc-option, $(CFLAGS_KMSAN) -Werror),)
+   ifneq ($(CONFIG_COMPILE_TEST),y)
+        $(warning Cannot use CONFIG_KMSAN: \
+                  -fsanitize=kernel-memory is not supported by compiler)
+   endif
+endif
+
+endif
-- 
2.24.0.rc0.303.g954a862665-goog
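[Editor's note, not part of the patch: a minimal sketch of how a driver might
use the kmsan-checks.h API exported above (kmsan_unpoison_shadow(),
kmsan_check_memory()), assuming the declarations added by this series. The
device/helper names (my_hw_readback, my_dev_probe_example) and the 0xab fill
pattern are hypothetical and only serve as an illustration.]

	#include <linux/kmsan-checks.h>
	#include <linux/slab.h>
	#include <linux/string.h>

	/* Hypothetical helper: pretend hardware filled the buffer behind KMSAN's back. */
	static int my_hw_readback(void *buf, size_t len)
	{
		memset(buf, 0xab, len);
		return 0;
	}

	static int my_dev_probe_example(void)
	{
		size_t len = 64;
		void *buf = kmalloc(len, GFP_KERNEL);
		int ret;

		if (!buf)
			return -ENOMEM;
		ret = my_hw_readback(buf, len);
		if (!ret)
			/* The device wrote the data, so mark it as initialized. */
			kmsan_unpoison_shadow(buf, len);
		/* Ask KMSAN to report immediately if any byte is still uninitialized. */
		kmsan_check_memory(buf, len);
		kfree(buf);
		return ret;
	}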